October 2, 2022

stickyriceles

Software Development

As AI language skills grow, so do scientists’ concerns


The tech industry’s latest artificial intelligence constructs can be pretty convincing if you ask them what it feels like to be a sentient computer, or maybe just a dinosaur or squirrel. But they’re not so good — and sometimes dangerously bad — at handling other seemingly straightforward tasks.

Take, for instance, GPT-3, a Microsoft-controlled system that can generate paragraphs of human-like text based on what it’s learned from a vast database of digital books and online writings. It’s considered one of the most advanced of a new generation of AI algorithms that can converse, generate readable text on demand and even produce novel images and video.

Among other things, GPT-3 can write up most any text you ask for — a cover letter for a zookeeping job, say, or a Shakespearean-style sonnet set on Mars. But when Pomona College professor Gary Smith asked it a simple but nonsensical question about walking upstairs, GPT-3 muffed it.

“Yes, it is safe to walk upstairs on your hands if you wash them first,” the AI replied.

These powerful and power-chugging AI systems, technically known as “large language models” because they’ve been trained on a huge body of text and other media, are already getting baked into customer service chatbots, Google searches and “auto-complete” email features that finish your sentences for you. But most of the tech companies that built them have been secretive about their inner workings, making it hard for outsiders to understand the flaws that can make them a source of misinformation, racism and other harms.

“They’re very good at writing text with the proficiency of human beings,” said Teven Le Scao, a research engineer at the AI startup Hugging Face. “Something they’re not very good at is being factual. It looks very coherent. It’s almost true. But it’s often wrong.”

That’s one reason a coalition of AI researchers co-led by Le Scao — with help from the French government — launched a new large language model July 12 that’s supposed to serve as an antidote to closed systems such as GPT-3. The group is called BigScience and its model is BLOOM, for the BigScience Large Open-science Open-access Multilingual Language Model. Its main breakthrough is that it works across 46 languages, including Arabic, Spanish and French — unlike most systems that are focused on English or Chinese.

It’s not just Le Scao’s group aiming to open up the black box of AI language models. Big Tech company Meta, the parent of Facebook and Instagram, is also calling for a more open approach as it tries to catch up to the systems built by Google and OpenAI, the company that runs GPT-3.

“We’ve seen announcement after announcement after announcement of people doing this kind of work, but with very little transparency, very little ability for people to really look under the hood and peek into how these models work,” said Joelle Pineau, managing director of Meta AI.

Competitive pressure to build the most eloquent or informative system — and profit from its applications — is one of the reasons that most tech companies keep a tight lid on them and don’t collaborate on community norms, said Percy Liang, an associate computer science professor at Stanford who directs its Center for Research on Foundation Models.

“For some companies this is their secret sauce,” Liang said. But they are often also worried that losing control could lead to irresponsible uses. As AI systems are increasingly able to write health advice websites, high school term papers or political screeds, misinformation can proliferate and it will get harder to know what’s coming from a human or a computer.
