How AI Can Turn Thoughts Into Images and Why You Should Care

Based on your brain activity, computers might soon be able to figure out what you see. Researchers recently showed that artificial intelligence (AI) could read brain scans and reconstruct images a person had seen. The new study adds to growing worries that AI could invade people’s privacy.

Kevin Gordon, vice president of AI Technologies at NexOptic, told Lifewire in an email interview that ChatGPT remains the latest AI craze, but privacy experts have pointed out that the tool was trained on practically everything scraped off the internet, including our personal information.

“This was done without anyone’s knowledge or permission. No one was even told what was going on. And because ChatGPT works with prompts, there are worries that users may accidentally include personal information in these prompts, which are then saved in ChatGPT’s database.”

The Things that Matter

In the new study, the researchers used an algorithm called Stable Diffusion, a generative AI similar to DALL-E 2 that turns text into images. The software can produce new images based on what you type. By including photo captions in the algorithm’s training, the team cut down the time needed to train the system on each participant.
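To make the text-to-image step concrete, here is a minimal sketch using the open-source Hugging Face diffusers library. It is not the code from the study; the model checkpoint, prompt, and GPU assumption are all illustrative.

```python
# Minimal text-to-image example with the open-source "diffusers" library.
# Assumes the library is installed and a CUDA GPU is available.
import torch
from diffusers import StableDiffusionPipeline

# Download a pretrained Stable Diffusion checkpoint (illustrative choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Turn a plain-text prompt into an image.
prompt = "a photograph of a lighthouse on a rocky coast at sunset"
image = pipe(prompt).images[0]
image.save("lighthouse.png")
```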

The researchers wrote on their website, “This paper shows that it is possible to decode (or create) images from brain activity by combining visual structural information decoded from activity in the early visual cortex with semantic features decoded from activity in higher-order areas and directly mapping the decoded information to the internal representations of a latent diffusion model (LDM; Stable Diffusion) without fine-tuning.”
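A rough sense of that pipeline can be given with a toy sketch: fit simple linear (ridge) maps from brain responses to the generative model’s internal representations, then hand the predictions to the diffusion model to render an image. The sketch below uses synthetic data, scikit-learn, and made-up dimensions; it illustrates the idea, not the study’s actual code.

```python
# Toy sketch of the decoding idea: linear (ridge) maps from brain activity to a
# generative model's internal representations. All data and sizes are synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_images, n_voxels = 500, 2000        # images viewed in the scanner, recorded voxels
latent_dim, semantic_dim = 1024, 768  # structural latent z and semantic features c (illustrative sizes)

# Stand-ins for recorded fMRI responses and for the model's representations of
# the images that evoked them.
early_visual = rng.standard_normal((n_images, n_voxels))    # early visual cortex activity
higher_order = rng.standard_normal((n_images, n_voxels))    # higher-order visual areas
z_latents = rng.standard_normal((n_images, latent_dim))     # visual/structural information
c_semantics = rng.standard_normal((n_images, semantic_dim)) # semantic features

# One linear decoder per representation: structure from early areas, semantics
# from higher-order areas, echoing the paper's description.
decode_z = Ridge(alpha=100.0).fit(early_visual, z_latents)
decode_c = Ridge(alpha=100.0).fit(higher_order, c_semantics)

# For a new scan, predict both representations; a latent diffusion model (not
# shown here) would then turn them into a reconstructed image.
z_hat = decode_z.predict(rng.standard_normal((1, n_voxels)))
c_hat = decode_c.predict(rng.standard_normal((1, n_voxels)))
print(z_hat.shape, c_hat.shape)  # (1, 1024) (1, 768)
```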

AI could also invade privacy in ways beyond what the new study shows. Harold Li, a vice president at the security company ExpressVPN, said in an email that AI technology causes concern because it analyzes data and gets better at understanding the world, and its users, over time.

“Artificial intelligence powers technologies that we use every day,” Li said. “Notice how your phone’s autocorrect starts to understand words that aren’t in the dictionary just because you use them a lot? That’s machine learning.”
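As a rough illustration of the behavior Li describes, the toy sketch below “learns” new words from how often you type them. It is a simplified frequency model, not how any real keyboard’s autocorrect is implemented; the dictionary and threshold are invented for the example.

```python
# Toy "personal dictionary" that starts accepting a word after repeated use.
# A simplified frequency model for illustration, not a real keyboard's algorithm.
from collections import Counter

BASE_DICTIONARY = {"hello", "world", "privacy", "artificial", "intelligence"}
LEARN_THRESHOLD = 3  # uses before a new word is trusted (invented value)

usage_counts = Counter()

def observe(word: str) -> None:
    """Record that the user typed this word."""
    usage_counts[word.lower()] += 1

def is_known(word: str) -> bool:
    """A word is known if it is in the base dictionary or typed often enough."""
    w = word.lower()
    return w in BASE_DICTIONARY or usage_counts[w] >= LEARN_THRESHOLD

for _ in range(3):
    observe("NexOptic")       # the user keeps typing an out-of-dictionary word

print(is_known("NexOptic"))   # True: learned from repeated use
print(is_known("Lifewire"))   # False: never typed
```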

Richard Watson-Bruhn, US Head of Digital Trust & Cyber Security at PA Consulting, said in an email interview that AI also makes it easier to gather more personal and sensitive data. He pointed to the growing volume of video footage reaching police as evidence of this trend.

“Now that AI can collect and analyze video, it is much easier and more common for video data to be collected, possibly without your knowledge,” he said. “This collection by itself is bad because it takes away the privacy we used to have.”

Watson-Bruhn said that as the use of AI grows, so does the need for data to build AI models. “This makes it easier and more likely that your personal information will be combined and used without your knowledge in ways you would never agree with,” he said.

“There are many examples of this, but perhaps the best known is how Cambridge Analytica used data to try to influence the outcome of US elections, even though it never told people how their data would be used when it was collected. Another risk is that your information could be misused, whether deliberately or by accident.”

Regulating AI

Some experts say regulation might be the only way to stop AI from collecting too much personal information about users. The EU’s General Data Protection Regulation is one law that already touches on AI: it gives individuals the right to have a human explain the reasoning behind any legal or similarly significant decision made about them by an automated system.

Asif Savvas, the Chief Product Officer at Simeio, said in an email interview, “For the time being, most of the responsibility for protecting privacy falls on the companies that collect data about user identities.”

The White House’s recently released National Cybersecurity Strategy, along with its Blueprint for an AI Bill of Rights, offers guidance to help businesses use AI responsibly.

“But it’s more a set of guidelines than a law, and it’s hard to enforce,” Li said. “At the same time, privacy is a basic human right, and anything that threatens it, including AI, must be regulated.”

