2016-04-04 HS: Digital watchdog promises to clean up the filth of online discussion boards

Started by VeePee, 04.04.2016, 08:00:25


foobar

Quote from: sivullinen. on 15.02.2017, 13:01:25
Quote from: foobar on 15.02.2017, 07:44:53
I am quite seriously considering building and training a neural network that could tell whether a given text is liked on Homma or not.

Are you considering it quite seriously, or are you just saying that you are considering it quite seriously?

I am considering it. The available time and energy just tend to limit how far such projects actually get.
"You can say it, except that you can't really, because by saying it you also hand an open mandate to the arbitrary rule taking place in the EU."
- ApuaHommmaan on whether one can say that Russia is committing torture-killings of civilians in Ukraine, and whether those acts can be condemned.
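For illustration of the classifier idea above: a minimal sketch of a "would this be liked on Homma?" model, assuming scikit-learn is available. This is a plain bag-of-words baseline rather than a neural network, and the example posts and labels are invented.

```python
# Minimal sketch of a "liked on Homma or not" text classifier.
# Assumes scikit-learn is installed; the training texts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: forum posts with a liked (1) / not liked (0) label.
texts = [
    "Hyvä kirjoitus, juuri näin.",
    "Tämä on täyttä roskaa.",
    "Asiallista analyysia, kiitos.",
    "Ei kestä lähempää tarkastelua.",
]
labels = [1, 0, 1, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
    LogisticRegression(),                 # simple linear classifier
)
model.fit(texts, labels)

# Probability that a new post would be liked.
print(model.predict_proba(["Tästä olen samaa mieltä."])[0, 1])
```

With real liked/not-liked posts, the linear model could be swapped for a neural network, but the overall pipeline (vectorise the text, fit, predict) would look much the same.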

Lady Deadpool

A couple of pieces from The Guardian this month on how AIs are racist and sexist.

https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals (13.4.2017)

(Links in the article:

http://science.sciencemag.org/content/356/6334/183 (14.4.2017)
https://www.theguardian.com/science/2015/may/21/google-a-step-closer-to-developing-machines-with-human-like-intelligence (21.5.2015)
https://implicit.harvard.edu/implicit/
https://www.theguardian.com/technology/2017/jan/27/ai-artificial-intelligence-watchdog-needed-to-prevent-discriminatory-automated-decisions (27.1.2017))

Quote
AI programs exhibit racial and gender biases, research reveals

Machine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use, scientists say.

An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases.

The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.

In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained.

However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.

Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: "A lot of people are saying this is showing that AI is prejudiced. No. This is showing we're prejudiced and that AI is learning it."

But Bryson warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases. "A danger would be if you had an AI system that didn't have an explicit part that was driven by moral ideas, that would be bad," she said.

The research, published in the journal Science, focuses on a machine learning tool known as "word embedding", which is already transforming the way computers interpret speech and text. Some argue that the natural next step for the technology may involve machines developing human-like abilities such as common sense and logic.

"A major reason we chose to study word embeddings is that they have been spectacularly successful in the last few years in helping computers make sense of language," said Arvind Narayanan, a computer scientist at Princeton University and the paper's senior author.

The approach, which is already used in web search and machine translation, works by building up a mathematical representation of language, in which the meaning of a word is distilled into a series of numbers (known as a word vector) based on which other words most frequently appear alongside it. Perhaps surprisingly, this purely statistical approach appears to capture the rich cultural and social context of what a word means in a way that a dictionary definition would be incapable of.

For instance, in the mathematical "language space", words for flowers are clustered closer to words linked to pleasantness, while words for insects are closer to words linked to unpleasantness, reflecting common views on the relative merits of insects versus flowers.
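As a toy illustration of the word-vector idea: the three-dimensional vectors below are invented for the example (real embeddings are learned from co-occurrence statistics and have hundreds of dimensions), but they show how "closeness" in the language space is measured with cosine similarity.

```python
# Toy word vectors and cosine similarity.
# The vectors are invented; real embeddings are learned from large text corpora.
import numpy as np

vectors = {
    "flower":     np.array([0.9, 0.8, 0.1]),
    "insect":     np.array([0.1, 0.2, 0.9]),
    "pleasant":   np.array([0.8, 0.9, 0.2]),
    "unpleasant": np.array([0.2, 0.1, 0.8]),
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 for similar directions, near 0.0 for unrelated ones."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["flower"], vectors["pleasant"]))   # high: flowers sit near pleasantness
print(cosine(vectors["insect"], vectors["pleasant"]))   # low: insects sit further away
```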

The latest paper shows that some more troubling implicit biases seen in human psychology experiments are also readily acquired by algorithms. The words "female" and "woman" were more closely associated with arts and humanities occupations and with the home, while "male" and "man" were closer to maths and engineering professions.

And the AI system was more likely to associate European American names with pleasant words such as "gift" or "happy", while African American names were more commonly associated with unpleasant words.

The findings suggest that algorithms have acquired the same biases that lead people (in the UK and US, at least) to match pleasant words and white faces in implicit association tests.
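The paper's association test (WEAT) boils down to comparing average cosine similarities between target words and two attribute sets. A sketch of that score, again with invented vectors standing in for real name embeddings:

```python
# WEAT-style association gap with invented vectors.
# A positive gap means the first target sits closer to the pleasant words,
# which is the pattern the study reports for real embeddings.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, pleasant, unpleasant):
    """Mean similarity to pleasant words minus mean similarity to unpleasant words."""
    return (np.mean([cosine(word, p) for p in pleasant])
            - np.mean([cosine(word, u) for u in unpleasant]))

# Hypothetical embeddings: two names and two small attribute sets.
name_a = np.array([0.7, 0.6, 0.2])   # stands in for a European American name
name_b = np.array([0.2, 0.3, 0.8])   # stands in for an African American name
pleasant   = [np.array([0.8, 0.7, 0.1]), np.array([0.9, 0.5, 0.2])]
unpleasant = [np.array([0.1, 0.2, 0.9]), np.array([0.2, 0.1, 0.7])]

print(association(name_a, pleasant, unpleasant) - association(name_b, pleasant, unpleasant))
```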


These biases can have a profound impact on human behaviour. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidate's name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices.

"If you didn't believe that there was racism associated with people's names, this shows it's there," said Bryson.

The machine learning tool used in the study was trained on a dataset known as the "common crawl" corpus – a list of 840bn words that have been taken as they appear from material published online. Similar results were found when the same tools were trained on data from Google News.

Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford, said: "The world is biased, the historical data is biased, hence it is not surprising that we receive biased results."

Rather than algorithms representing a threat, they could present an opportunity to address bias and counteract it where appropriate, she added.

"At least with algorithms, we can potentially know when the algorithm is biased," she said. "Humans, for example, could lie about the reasons they did not hire someone. In contrast, we do not expect algorithms to lie or deceive us."

However, Wachter said the question of how to eliminate inappropriate bias from algorithms designed to understand language, without stripping away their powers of interpretation, would be challenging.

"We can, in principle, build systems that detect biased decision-making, and then act on it," said Wachter, who along with others has called for an AI watchdog to be established. "This is a very complicated task, but it is a responsibility that we as society should not shy away from."

https://www.theguardian.com/commentisfree/2017/apr/20/robots-racist-sexist-people-machines-ai-language (20.4.2017)

(Links in the article:

https://www.theguardian.com/news/datablog/2013/aug/14/problem-with-algorithms-magnifying-misbehaviour (14.8.2013)
https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter (24.3.2016)
https://www.theguardian.com/technology/2015/jul/01/google-sorry-racist-auto-tag-photo-app (1.7.2015))

Quote
Robots are racist and sexist. Just like the people who created them

Machines learn their prejudices in language. It's not their fault, but we still need to fix the problem.

Can machines think – and, if so, can they think critically about race and gender? Recent reports have shown that machine-learning systems are picking up racist and sexist ideas embedded in the language patterns they are fed by human engineers. The idea that machines can be as bigoted as people is an uncomfortable one for anyone who still believes in the moral purity of the digital future, but there's nothing new or complicated about it. "Machine learning" is a fancy way of saying "finding patterns in data". Of course, as Lydia Nicholas, senior researcher at the innovation thinktank Nesta, explains, all this data "has to have been collected in the past, and since society changes, you can end up with patterns that reflect the past. If those patterns are used to make decisions that affect people's lives you end up with unacceptable discrimination."

Robots have been racist and sexist for as long as the people who created them have been racist and sexist, because machines can work only from the information given to them, usually by the white, straight men who dominate the fields of technology and robotics. As long ago as 1986, the medical school at St George's hospital in London was found guilty of racial and sexual discrimination when it automated its admissions process based on data collected in the 1970s. The program looked at the sort of candidates who had been successful in the past, and gave similar people interviews. Unsurprisingly, the people the computer considered suitable were male, and had names that looked Anglo-Saxon.

Automation is a great excuse for assholery – after all, it's just numbers, and the magic of "big data" can provide plausible deniability for prejudice. Machine learning, as the technologist Maciej Cegłowski observed, can function in this way as "money laundering" for bias.

This is a problem, and it will become a bigger problem unless we take active measures to fix it. We are moving into an era when "smart" machines will have more and more influence on our lives. The moral economy of machines is not subject to oversight in the way that human bureaucracies are. Last year Microsoft created a chatbot, Tay, which could "learn" and develop as it engaged with users on social media. Within hours it had pledged allegiance to Hitler and started repeating "alt-right" slogans – which is what happens when you give Twitter a baby to raise. Less intentional but equally awkward instances of robotic intolerance keep cropping up, as when one Google image search using technology "trained" to recognise faces based on images of Caucasians included African-American people among its search results for gorillas.

These, however, are only the most egregious examples. Others – ones we might not notice on a daily basis – are less likely to be spotted and fixed. As more of the decisions affecting our daily lives are handed over to automatons, subtler and more insidious shifts in the way we experience technology, from our dealings with banks and business to our online social lives, will continue to be based on the baked-in bigotries of the past – unless we take steps to change that trend.

Should we be trying to build robots with the capacity for moral judgment? Should technologists be constructing AIs that can implement basic assessments about justice and fairness? I have a horrible feeling I've seen that movie, and it doesn't end well for human beings. There are other frightening futures, however, and one of them is the society where we allow the weary bigotries of the past to become written into the source code of the present.

Machines learn language by gobbling up and digesting huge bodies of all the available writing that exists online. What this means is that the voices that dominated the world of literature and publishing for centuries – the voices of white, western men – are fossilised into the language patterns of the instruments influencing our world today, along with the assumptions those men had about people who were different from them. This doesn't mean robots are racist: it means people are racist, and we're raising robots to reflect our own prejudices.

Human beings, after all, learn our own prejudices in a very similar way. We grow up understanding the world through the language and stories of previous generations. We learn that "men" can mean "all human beings", but "women" never does – and so we learn that to be female is to be other – to be a subclass of person, not the default. We learn that when our leaders and parents talk about how a person behaves to their "own people", they sometimes mean "people of the same race" – and so we come to understand that people of a different skin tone to us are not part of that "we". We are given one of two pronouns in English – he or she – and so we learn that gender is a person's defining characteristic, and there are no more than two. This is why those of us who are concerned with fairness and social justice often work at the level of language – and why when people react to having their prejudices confronted, they often complain about "language policing", as if the use of words could ever be separated from the worlds they create.

Language itself is a pattern for predicting human experience. It does not just describe our world – it shapes it too. The encoded bigotries of machine learning systems give us an opportunity to see how this works in practice. But human beings, unlike machines, have moral faculties – we can rewrite our own patterns of prejudice and privilege, and we should.

Sometimes we fail to be as fair and just as we would like to be – not because we set out to be bigots and bullies, but because we are working from assumptions we have internalised about race, gender and social difference. We learn patterns of behaviour based on bad, outdated information. That doesn't make us bad people, but nor does it excuse us from responsibility for our behaviour. Algorithms are expected to update their responses based on new and better information, and the moral failing occurs when people refuse to do the same. If a robot can do it, so can we.
Serial hater.

Niobium

This whole fuss reminds me of a PID controller. You change one parameter and the whole system drifts into chaos as the effects compound, at worst with the kind assistance of the PID controllers further down the process.
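For reference, one step of a textbook PID controller looks like the sketch below; the gains and inputs are arbitrary illustration values. The point of the analogy is that changing a single gain alters every subsequent output, and any downstream controllers then react to those altered outputs in turn.

```python
# One discrete update of a textbook PID controller (illustration values only).
def pid_step(error, prev_error, integral, kp, ki, kd, dt):
    """Return the controller output and the updated integral term."""
    integral += error * dt                      # I term accumulates past error
    derivative = (error - prev_error) / dt      # D term reacts to the rate of change
    output = kp * error + ki * integral + kd * derivative
    return output, integral

# A large derivative gain makes the output jump hard on a small change in error.
out, integral = pid_step(error=1.0, prev_error=0.0, integral=0.0,
                         kp=2.0, ki=0.5, kd=8.0, dt=0.1)
print(out)
```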

One of these days I'll write, say, the words "musta tuntuu" ("I feel like" – "musta" also happens to mean "black") on some forum that deals with breeding black tulips. From there it's a short leap to the Dutch colonial era.

One flap of a butterfly's wings and you already find yourself being interrogated.
"Surely every Russian mother's greatest dream is to give birth to more children to man the trenches of the future, put a hand grenade under their chin and pull the pin. In return you get a sack of potatoes." (Member Hohtis.)

b_kansalainen

Quote from: VeePee on 04.04.2016, 08:00:25
HS promises that automated comment moderation will soon arrive to clean up the "filth" of online discussion boards.
A year has passed. Has the filth already been cleaned out of Hesari? It doesn't seem to have gone quite like on Strömsö. Personally, I'm not surprised. AI simply can't keep up with human creativity.