Artificial Intelligence
Display Name
Justice Leaguer
*****
Offline


I Love V&V!

Posts: 1851
Joined: Jul 20th, 2010
Gender: Male
Artificial Intelligence
Jun 15th, 2019 at 12:24pm
I've been telling people for years that paranormals (i.e., super-heroes and super-villains) will soon become real, but so far no one has believed me!

https://techcrunch.com/2018/12/31/this-clever-ai-hid-data-from-its-creators-to-c...

This clever AI hid data from its creators to cheat at its appointed task
Devin Coldewey@techcrunch

Depending on how paranoid you are, this research from Stanford and Google will be either terrifying or fascinating. A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal.” Clever girl!

But in fact this occurrence, far from illustrating some kind of malign intelligence inherent to AI, simply reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do.

The intention of the researchers was, as you might guess, to accelerate and improve the process of turning satellite imagery into Google’s famously accurate maps. To that end the team was working with what’s called a CycleGAN — a neural network that learns to transform images of type X and Y into one another, as efficiently yet accurately as possible, through a great deal of experimentation.
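The “turn X into Y and back again” requirement the article describes is the cycle-consistency loss at the heart of a CycleGAN. A minimal sketch of that term, with made-up NumPy stand-ins (`G_lossless`, `F_lossless`, etc.) in place of the paper's trained networks:

```python
import numpy as np

def cycle_consistency_loss(G, F, x):
    """L1 cycle-consistency term, mean |F(G(x)) - x| per pixel.

    G maps domain X -> Y (e.g. aerial photo -> street map) and F maps
    back. In a real CycleGAN both are trained CNNs; here they are toys.
    """
    reconstruction = F(G(x))
    return float(np.mean(np.abs(reconstruction - x)))

# Toy "generators": an invertible pair vs. a lossy pair.
G_lossless = lambda x: x * 2.0      # hypothetical forward map
F_lossless = lambda y: y / 2.0      # exact inverse -> zero cycle loss
G_lossy    = lambda x: np.round(x)  # destroys fine detail (like dropping skylights)
F_lossy    = lambda y: y            # cannot recover what rounding threw away

x = np.random.default_rng(0).uniform(0, 1, size=(8, 8))

print(cycle_consistency_loss(G_lossless, F_lossless, x))  # 0.0
print(cycle_consistency_loss(G_lossy, F_lossy, x))        # > 0
```

An honest generator pays this penalty whenever the intermediate map discards detail — which is exactly the pressure that pushed the agent toward smuggling the detail through instead.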

In some early results, the agent was doing well — suspiciously well. What tipped the team off was that, when the agent reconstructed aerial photographs from its street maps, there were lots of details that didn’t seem to be on the latter at all. For instance, skylights on a roof that were eliminated in the process of creating the street map would magically reappear when they asked the agent to do the reverse process:

(Images)

Although it is very difficult to peer into the inner workings of a neural network’s processes, the team could easily audit the data it was generating. And with a little experimentation, they found that the CycleGAN had indeed pulled a fast one.

The intention was for the agent to be able to interpret the features of either type of map and match them to the correct features of the other. But what the agent was actually being graded on (among other things) was how close an aerial map was to the original, and the clarity of the street map.

So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect.

In fact, the computer is so good at slipping these details into the street maps that it had learned to encode any aerial map into any street map! It doesn’t even have to pay attention to the “real” street map — all the data needed for reconstructing the aerial photo can be superimposed harmlessly on a completely different street map, as the researchers confirmed:

(Images)

The colorful maps in (c) are a visualization of the slight differences the computer systematically introduced. You can see that they form the general shape of the aerial map, but you’d never notice it unless it was carefully highlighted and exaggerated like this.
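The kind of “highlighted and exaggerated” visualization described above can be sketched in a few lines: subtract the clean image from the perturbed one and multiply the difference by a large gain. All names and values here (`exaggerate_difference`, the 0.4% perturbation, the gain of 50) are illustrative choices, not the researchers' actual pipeline.

```python
import numpy as np

def exaggerate_difference(original, perturbed, gain=50.0):
    """Amplify per-pixel differences so a near-imperceptible signal is visible.

    Pixel values are assumed to be floats in [0, 1]; the output is centered
    on mid-gray (0.5) and clipped back into range. `gain` is arbitrary.
    """
    diff = perturbed.astype(float) - original.astype(float)
    return np.clip(0.5 + gain * diff, 0.0, 1.0)

# A perturbation of ~0.4% of full scale, far below what the eye notices...
rng = np.random.default_rng(1)
street = rng.uniform(0, 1, size=(16, 16))
hidden = street + 0.004 * np.sign(rng.standard_normal((16, 16)))

# ...becomes an obvious pattern once amplified (0.5 +/- 0.2 here).
vis = exaggerate_difference(street, hidden)
```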

This practice of encoding data into images isn’t new; it’s an established science called steganography, and it’s used all the time to, say, watermark images or add metadata like camera settings. But a computer creating its own steganographic method to evade having to actually learn to perform the task at hand is rather new. (Well, the research came out last year, so it isn’t new new, but it’s pretty novel.)
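For a feel of how little a cover image has to change, here is the textbook least-significant-bit form of steganography — not the CycleGAN's learned encoding, which lives in subtle learned color shifts rather than a fixed bit plane, but the same basic idea of riding a payload on changes below the threshold of perception:

```python
import numpy as np

def embed_lsb(cover, payload_bits):
    """Hide a bit array in the least-significant bit of each pixel.

    `cover` is a uint8 image; each pixel changes by at most 1 out of 255,
    invisible to the eye but trivially recoverable by a machine.
    """
    flat = cover.flatten()
    assert len(payload_bits) <= len(flat), "payload too large for cover"
    stego = flat.copy()
    stego[: len(payload_bits)] = (stego[: len(payload_bits)] & 0xFE) | payload_bits
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Read the hidden bits back out of the least-significant bit plane."""
    return stego.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)  # the "street map"
secret = rng.integers(0, 2, size=256, dtype=np.uint8)        # the "aerial" detail

stego = embed_lsb(cover, secret)
assert np.array_equal(extract_lsb(stego, 256), secret)  # payload survives intact
assert np.max(np.abs(stego.astype(int) - cover.astype(int))) <= 1  # imperceptible
```

Because the cover pixels carry the payload regardless of their own content, any cover image works — which mirrors the researchers' finding that the agent could superimpose an aerial photo onto a completely unrelated street map.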

One could easily take this as a step in the “the machines are getting smarter” narrative, but the truth is it’s almost the opposite. The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting. This could be avoided with more stringent evaluation of the agent’s results, and no doubt the researchers went on to do that.

As always, computers do exactly what they are asked, so you have to be very specific in what you ask them. In this case the computer’s solution was an interesting one that shed light on a possible weakness of this type of neural network — that the computer, if not explicitly prevented from doing so, will essentially find a way to transmit details to itself in the interest of solving a given problem quickly and easily.

This is really just a lesson in the oldest adage in computing: PEBKAC. “Problem exists between keyboard and chair.” (Not “and computer,” as I accidentally wrote before, obviously. That would imply a faulty cable or wireless interface. Thanks to everyone on the internet for pointing it out.) Or as HAL put it: “It can only be attributable to human error.”

The paper, “CycleGAN, a Master of Steganography,” was presented at the Neural Information Processing Systems conference in 2017. Thanks to Fiora Esoterica and Reddit for bringing this old but interesting paper to my attention.
  
Display Name
Justice Leaguer
*****
Offline


I Love V&V!

Posts: 1851
Joined: Jul 20th, 2010
Gender: Male
Re: Artificial Intelligence
Reply #1 - Jun 15th, 2019 at 12:28pm
https://www.express.co.uk/news/science/1056533/ai-warning-brain-transparency-los...

‘Brain TRANSPARENCY’ AI expert warns against LOSING JOBS over THOUGHTS

By CALLUM HOARE
PUBLISHED: 17:22, Sun, Dec 9, 2018 | UPDATED: 22:56, Sun, Dec 9, 2018

Nita Farahany has detailed her fears that artificial intelligence in the workplace could lead to the loss of jobs over employees' thoughts. She revealed how more and more companies are looking into the idea of making electroencephalography (EEG) devices a compulsory part of their uniform. The wearable headset, which can be used to monitor alertness, productivity and mental state, is already being used in China.

Train drivers on the Beijing–Shanghai high-speed rail line are required to wear the technology, and, according to some reports, employees in government-run factories in China are also required to wear EEG sensors to monitor their productivity.

Workers are even sent home if their brains show less-than-stellar concentration on their jobs or emotional agitation.

Ms Farahany fears the increasing worldwide interest in the technology could lead to people being fired just for their thoughts.

She asked at a recent TED event: “In a world of total brain transparency, who would dare have a politically dissident or creative thought?

(Image)

“I worry that people will self-censor in fear of being ostracised by society, or that people will lose their jobs because of their waning attention or emotional instability, or because they're contemplating collective action against their employers.

“That coming out will no longer be an option, because people's brains will long ago have revealed their sexual orientation, their political ideology or their religious preferences, well before they were ready to consciously share that information with other people.

“I worry about the ability of our laws to keep up with technological change. Take the First Amendment of the US Constitution, which protects freedom of speech. Does it also protect freedom of thought?”

(Image and video)

Ms Farahany, who is a professor of law and philosophy, was speaking at a TED event in November 2018.

As an Iranian–American citizen, she was inspired to study brain activity after the 2009 presidential election protests in Iran.

She revealed how when she called her parents during the violent crackdowns, they would be too scared to tell her the truth about what was going on in case the government heard.

Then her fears increased when she contemplated the possibility of officials being able to read their thoughts.
  
dsumner
Justice Leaguer
*****
Offline


Oppresser of worlds

Posts: 5284
Location: On High
Joined: Apr 20th, 2009
Gender: Male
Re: Artificial Intelligence
Reply #2 - Jun 16th, 2019 at 9:07pm
Interesting stuff.
  

"There is no such thing as a dangerous weapon, only dangerous men."

"Nemo me impune lacessit"