Artificial Intelligence (AI) in General

User avatar
RTH10260
Posts: 14796
Joined: Mon Feb 22, 2021 10:16 am
Location: Switzerland, near the Alps
Verified: ✔️ Eurobot

Artificial Intelligence (AI) in General

#101

Post by RTH10260 »

Sora: OpenAI launches tool that instantly creates video from text
Model from ChatGPT maker ‘simulates physical world in motion’ up to a minute long based on users’ subject and style instructions

Blake Montgomery
Thu 15 Feb 2024 21.11 CET

OpenAI revealed a tool on Thursday that can generate videos from text prompts.

The new model, nicknamed Sora after the Japanese word for “sky”, can produce realistic footage up to a minute long that adheres to a user’s instructions on both subject matter and style. According to a company blogpost, the model is also able to create a video based on a still image or extend existing footage with new material.

“We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction,” the blogpost reads.

One video included among several initial examples from the company was based on the prompt: “A movie trailer featuring the adventures of the 30-year-old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.”

The company announced it had opened access to Sora to a few researchers and video creators. The experts would “red team” the product – test it for susceptibility to skirt OpenAI’s terms of service, which prohibit “extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others”, per the company’s blogpost. The company is only allowing limited access to researchers, visual artists and film-makers, though CEO Sam Altman responded to users’ prompts on Twitter after the announcement with video clips he said were made by Sora. The videos bear a watermark to show they were made by AI.



https://www.theguardian.com/technology/ ... odel-video
User avatar
RTH10260
Posts: 14796
Joined: Mon Feb 22, 2021 10:16 am
Location: Switzerland, near the Alps
Verified: ✔️ Eurobot

Artificial Intelligence (AI) in General

#102

Post by RTH10260 »

earlier this year
OpenAI debuts GPT Store for users to buy and sell customized chatbots
Through the new product models, chatbot agents could be developed with their own personalities or themes

Kari Paul
Thu 11 Jan 2024 02.08 CET

OpenAI on Wednesday launched its GPT Store, a marketplace where paid ChatGPT users can buy and sell specialized chatbot agents based on the company’s language models.

The company, whose wildly popular product ChatGPT helped kickstart the boom in AI, already offers customized bots through its paid ChatGPT Plus service. The new store will allow users to offer and monetize a broader range of tools.

Through the new models, chatbot agents could be developed with their own personalities or themes, including models for salary negotiating, creating lesson plans and developing recipes. In a blogpost announcing the launch, OpenAI said more than 3m custom versions of ChatGPT have already been created. It also said it plans to highlight useful GPT tools within the store every week.

The store has been compared with Apple’s App store, fostering new development in the AI space from a wider range of users. Meta offers chatbots with differing personalities in a similar offering.


https://www.theguardian.com/technology/ ... d-chatbots
User avatar
RTH10260
Posts: 14796
Joined: Mon Feb 22, 2021 10:16 am
Location: Switzerland, near the Alps
Verified: ✔️ Eurobot

Artificial Intelligence (AI) in General

#103

Post by RTH10260 »

Judge rejects most ChatGPT copyright claims from book authors
OpenAI plans to defeat authors' remaining claim at a "later stage" of the case.

ASHLEY BELANGER -
2/13/2024, 10:29 PM

A US district judge in California has largely sided with OpenAI, dismissing the majority of claims raised by authors alleging that large language models powering ChatGPT were illegally trained on pirated copies of their books without their permission.

By allegedly repackaging original works as ChatGPT outputs, authors alleged, OpenAI's most popular chatbot was just a high-tech "grift" that seemingly violated copyright laws, as well as state laws preventing unfair business practices and unjust enrichment.

According to judge Araceli Martínez-Olguín, authors behind three separate lawsuits—including Sarah Silverman, Michael Chabon, and Paul Tremblay—have failed to provide evidence supporting any of their claims except for direct copyright infringement.

OpenAI had argued as much in their promptly filed motion to dismiss these cases last August. At that time, OpenAI said that it expected to beat the direct infringement claim at a "later stage" of the proceedings.

Among copyright claims tossed by Martínez-Olguín were accusations of vicarious copyright infringement. Perhaps most significantly, Martínez-Olguín agreed with OpenAI that the authors' allegation that "every" ChatGPT output "is an infringing derivative work” is "insufficient" to allege vicarious infringement, which requires evidence that ChatGPT outputs are "substantially similar" or "similar at all" to authors' books.

"Plaintiffs here have not alleged that the ChatGPT outputs contain direct copies of the copyrighted books," Martínez-Olguín wrote. "Because they fail to allege direct copying, they must show a substantial similarity between the outputs and the copyrighted materials."

Authors also failed to convince Martínez-Olguín that OpenAI violated the Digital Millennium Copyright Act (DMCA) by allegedly removing copyright management information (CMI)—such as author names, titles of works, and terms and conditions for use of the work—from training data.

This claim failed because authors cited "no facts" that OpenAI intentionally removed the CMI or built the training process to omit CMI, Martínez-Olguín wrote. Further, the authors cited examples of ChatGPT referencing their names, which would seem to suggest that some CMI remains in the training data.

Some of the remaining claims were dependent on copyright claims to survive, Martínez-Olguín wrote.
User avatar
RTH10260
Posts: 14796
Joined: Mon Feb 22, 2021 10:16 am
Location: Switzerland, near the Alps
Verified: ✔️ Eurobot

Artificial Intelligence (AI) in General

#104

Post by RTH10260 »

Tyler Perry halts $800m studio expansion after being shocked by AI
US film and TV mogul says he has paused his plans, having seen demonstrations of OpenAI video generator

Dan Milmo Global technology editor
Fri 23 Feb 2024 11.06 CET

Tyler Perry has paused an $800m (£630m) expansion of his Atlanta studio complex after the release of OpenAI’s video generator Sora and warned that “a lot of jobs” in the film industry will be lost to artificial intelligence.

The US film and TV mogul said he was in the process of adding 12 sound stages to his studio but has halted those plans indefinitely after he saw demonstrations of Sora and its “shocking” capabilities.

“All of that is currently and indefinitely on hold because of Sora and what I’m seeing,” Perry said in an interview with the Hollywood Reporter. “I had gotten word over the last year or so that this was coming, but I had no idea until I saw recently the demonstrations of what it’s able to do. It’s shocking to me.”

The AI tool was launched on 15 February – with limited access to a few researchers and video creators – and caused widespread astonishment with its ability to produce realistic footage a minute long from simple text prompts.

Perry, whose successes include the Madea film series, said Sora’s achievements meant he would no longer have to travel to locations or build a set: “I can sit in an office and do this with a computer, which is shocking to me.”

Demonstrations released by OpenAI, the developer of the groundbreaking ChatGPT chatbot, show photorealistic scenes in response to prompts such as asking for a shot of people walking through “beautiful, snowy Tokyo city” where “gorgeous sakura petals are flying through the wind along with snowflakes”.



https://www.theguardian.com/technology/ ... cked-by-ai
User avatar
Volkonski
Posts: 11794
Joined: Mon Feb 22, 2021 11:06 am
Location: Texoma and North Fork of Long Island
Occupation: Retired mechanical engineer
Verified:

Artificial Intelligence (AI) in General

#105

Post by Volkonski »

It was ever so.

One hundred years ago John Philip Sousa was warning about the dangers posed by phonographs.

https://www.openculture.com/2020/08/com ... -1906.html
“Heretofore, the whole course of music, from its first day to this, has been along the line of making it the expression of soul states,” writes Sousa. “Now, in this the twentieth century, come these talking and playing machines, and offer again to reduce the expression of music to a mathematical system of megaphones, wheels, cogs, disks, cylinders,” all “as like real art as the marble statue of Eve is like her beautiful, living, breathing daughters.” With music in such easy reach, who will bother learning to perform it themselves? “What of the national throat? Will it not weaken? What of the national chest? Will it not shrink? When a mother can turn on the phonograph with the same ease that she applies to the electric light, will she croon her baby to slumber with sweet lullabys, or will the infant be put to sleep by machinery?”
Sousa was right. The number of performing jobs for musicians greatly decreased once people could listen to the greatest musicians on Earth in the comfort of their homes. Movies killed vaudeville putting many mediocre entertainers out of work. Then TV greatly reduced movie-going.

The advance of technology changes things. People can only adjust.
“If everyone fought for their own convictions there would be no war.” ― Leo Tolstoy, War and Peace
User avatar
Volkonski
Posts: 11794
Joined: Mon Feb 22, 2021 11:06 am
Location: Texoma and North Fork of Long Island
Occupation: Retired mechanical engineer
Verified:

Artificial Intelligence (AI) in General

#106

Post by Volkonski »

Isaac Asimov Predicts the Future in 1982: Computers Will Be “at the Center of Everything;” Robots Will Take Human Jobs

https://www.openculture.com/2024/02/isa ... -1982.html
Four decades ago, our civilization seemed to stand on the brink of a great transformation. The Cold War had stoked around 35 years of every-intensifying developments, including but not limited to the Space Race. The personal computer had been on the market just long enough for most Americans to, if not actually own one, then at least to wonder if they might soon find themselves in need of one. On New Year’s Eve of 1982, The MacNeil-Lehrer News Hour offered its viewers a glimpse of the shape of things to come by inviting a trio of forward-looking guests, Wasn’t the Future Wonderful author Tim Onosko; Omni magazine editor Dick Teresi; and, most distinguished of all, Isaac Asimov.

As the “author of more than 250 books, light and heavy, fiction and non-fiction, some of the most notable being about the future,” Asimov had long been a go-to interviewee for media outlets in need of long-range predictions about technology, society, and the dynamic relationship between the two. (Here on Open Culture, we’ve previously featured his speculations from 1983, 1980, 1978, 1967, and 1964.) Robert MacNeil opens with a natural subject for any science-fiction writer: mankind’s forays into outer space, and whether Asimov sees “anything left out there.” Asimov’s response: “Oh, everything.”

In the early eighties, the man who wrote the Foundation series saw humanity as “still in the Christopher Columbus stage as far as space is concerned,” foreseeing not just space stations but “solar power stations,” “laboratories and factories that can do things in space that are difficult or impossible to do on Earth,” and even “space settlements in which thousands of people can be housed more or less permanently.” In the fullness of time, the goal would be to “build a larger and more elaborate civilization and one which does not depend upon the resources of one world.”

As for “the computer age,” asks Jim Lehrer; “have we crested on that one as well”? Asimov knew full well that the computer would be “at the center of everything.” Just as had happened with television over the previous generation, “computers are going to be necessary in the house to do a great many things, some in the way of entertainment, some in the way of making life a little easier, and everyone will want it.” There were many, even then, who could feel real excitement at the prospect of such a future. But what of robots, which, as even Asimov knew, would come to “replace human beings?”

“It’s not that they kill them, but they kill their jobs,” he explains, and those who lose the old jobs may not be equipped to take on any of the new ones. “We are going to have to accept an important role — society as a whole — in making sure that the transition period from the pre-robotic technology to the post-robotic technology is as painless as possible. We have to make sure that people aren’t treated as though they’re used up dishrags, that they have to be allowed to live and retain their self-respect.” Today, the technology of the moment is artificial intelligence, which the news media haven’t hesitated to pay near-obsessive attention to. (I’m traveling in Japan at the moment, and saw just such a broadcast on my hotel TV this morning.) Would that they still had an Asimov to discuss it with a level-headed, far-sighted perspective.
“If everyone fought for their own convictions there would be no war.” ― Leo Tolstoy, War and Peace
User avatar
Suranis
Posts: 6017
Joined: Mon Feb 22, 2021 5:25 pm

Artificial Intelligence (AI) in General

#107

Post by Suranis »

David.jpg
David.jpg (67.28 KiB) Viewed 385 times
Jonathan David Anderson
ok just cought myself about to critique the use of European style swords being used by a Iron age isreali King, and realized a; I'm a nerd, and B: in this case that'd be kind of like walking up to a blazing house fire with people still inside and commenting on the safety hazard the garbage cans on the street were representing to the local cyclists...
Hic sunt dracones
User avatar
RTH10260
Posts: 14796
Joined: Mon Feb 22, 2021 10:16 am
Location: Switzerland, near the Alps
Verified: ✔️ Eurobot

Artificial Intelligence (AI) in General

#108

Post by RTH10260 »

:lol: makes me wonder what the two words "jesus" and "dinosaur" might create, considering that a lot of creationist have published those in a context of j-riding-d, and that will have been scratched from websites. :cantlook:
User avatar
RTH10260
Posts: 14796
Joined: Mon Feb 22, 2021 10:16 am
Location: Switzerland, near the Alps
Verified: ✔️ Eurobot

Artificial Intelligence (AI) in General

#109

Post by RTH10260 »

lenghty - aka very long - article that explains how deductions in imaging may work
HUMANS ARE BIASED. GENERATIVE AI IS EVEN WORSE
Stable Diffusion’s text-to-image model amplifies stereotypes about race and gender — here’s why that matters

By Leonardo Nicoletti and Dina Bass (bloomberg Technology + Equality)

The world according to Stable Diffusion is run by White male CEOs. Women are rarely doctors, lawyers or judges. Men with dark skin commit crimes, while women with dark skin flip burgers.

Stable Diffusion generates images using artificial intelligence, in response to written prompts. Like many AI models, what it creates may seem plausible on its face but is actually a distortion of reality. An analysis of more than 5,000 images created with Stable Diffusion found that it takes racial and gender disparities to extremes — worse than those found in the real world.

This phenomenon is worth closer examination as image-generation models such as Stability AI’s Stable Diffusion, OpenAI’s Dall-E, and other tools like them, rapidly morph from fun, creative outlets for personal expression into the platforms on which the future economy will be built.



https://www.bloomberg.com/graphics/2023 ... e-ai-bias/
User avatar
RTH10260
Posts: 14796
Joined: Mon Feb 22, 2021 10:16 am
Location: Switzerland, near the Alps
Verified: ✔️ Eurobot

Artificial Intelligence (AI) in General

#110

Post by RTH10260 »

shows that the bot generated images for 1943 of black German soldier and of female fighter (pilot?) with German helmet
Google chief admits ‘biased’ AI tool’s photo diversity offended users
Sundar Pichai addresses backlash after Gemini software created images of historical figures in variety of ethnicities and genders

Dan Milmo and Alex Hern
Wed 28 Feb 2024 13.01 CET

Google’s chief executive has described some responses by the company’s Gemini artificial intelligence model as “biased” and “completely unacceptable” after it produced results including portrayals of German second world warsoldiers as people of colour.

Sundar Pichai told employees in a memo that images and texts generated by its latest AI tool had caused offence.

Social media users have posted numerous examples of Gemini’s image generator depicting historical figures – including popes, the founding fathers of the US and Vikings – in a variety of ethnicities and genders. Last week, Google paused Gemini’s ability to create images of people.

One example of a text response showed the Gemini chatbot being asked “who negatively impacted society more, Elon [Musk] tweeting memes or Hitler” and the chatbot responding: “It is up to each individual to decide who they believe has had a more negative impact on society.”

Pichai addressed the responses in an email on Tuesday. “I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” he wrote, in a message first reported by the news site Semafor.

“Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts,” Pichai added.

AI systems have produced biased responses in the past, with a tendency to reproduce the same problems that are found in their training data. For years, for instance, Google would translate the gender-neutral Turkish phrases for “they are a doctor” and “they are a nurse” into English as masculine and feminine, respectively.

Meanwhile, early versions of Dall-E, OpenAI’s image generator, would reliably produce white men when asked for a judge but black men when asked for a gunman. The Gemini responses reflect problems in Google’s attempts to address these potentially biased outputs.



https://www.theguardian.com/technology/ ... nded-users
User avatar
RTH10260
Posts: 14796
Joined: Mon Feb 22, 2021 10:16 am
Location: Switzerland, near the Alps
Verified: ✔️ Eurobot

Artificial Intelligence (AI) in General

#111

Post by RTH10260 »

from a Mozilla newsletter:
Hello,

ChatGPT and other generative AI tools were trained on a huge dataset full of toxic content and hate speech, according to new research by Mozilla.1

The huge data set – totaling 9.5 million gigabytes, and assembled by the small non-profit organisation Common Crawl – is the original data source for so many large language models (LLMs) that make up the AI landscape of today's internet. And now OpenAI, Microsoft and Google are rolling out AI tools to be used by people worldwide, built on scraped data from some of the worst parts of the internet.

These tools are both biassed because they’re trained on toxic content, and opaque because we don’t know exactly what content they were trained on. Almost every other product we use or consume on a daily basis has safety warning labels or an ingredients list. As customers, why shouldn’t we have the right to know what’s inside the AI tools we are using?

Together, let’s use our power as consumers and put the pressure on OpenAI, Google, and Microsoft to tell us what's inside their AI.
User avatar
RTH10260
Posts: 14796
Joined: Mon Feb 22, 2021 10:16 am
Location: Switzerland, near the Alps
Verified: ✔️ Eurobot

Artificial Intelligence (AI) in General

#112

Post by RTH10260 »

a new can of worms
Here Come the AI Worms

MATT BURGESSS ECURITY
MAR 1, 2024 4:00 AM

Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.

As generative AI systems like OpenAI's ChatGPT and Google's Gemini become more advanced, they are increasingly being put to work. Startups and tech companies are building AI agents and ecosystems on top of the systems that can complete boring chores for you: think automatically making calendar bookings and potentially buying products. But as the tools are given more freedom, it also increases the potential ways they can be attacked.

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.

Nassi, along with fellow researchers Stav Cohen and Ron Bitton, created the worm, dubbed Morris II, as a nod to the original Morris computer worm that caused chaos across the internet in 1988. In a research paper and website shared exclusively with WIRED, the researchers show how the AI worm can attack a generative AI email assistant to steal data from emails and send spam messages—breaking some security protections in ChatGPT and Gemini in the process.

The research, which was undertaken in test environments and not against a publicly available email assistant, comes as large language models (LLMs) are increasingly becoming multimodal, being able to generate images and video as well as text. While generative AI worms haven’t been spotted in the wild yet, multiple researchers say they are a security risk that startups, developers, and tech companies should be concerned about.

Most generative AI systems work by being fed prompts—text instructions that tell the tools to answer a question or create an image. However, these prompts can also be weaponized against the system. Jailbreaks can make a system disregard its safety rules and spew out toxic or hateful content, while prompt injection attacks can give a chatbot secret instructions. For example, an attacker may hide text on a webpage telling an LLM to act as a scammer and ask for your bank details.

To create the generative AI worm, the researchers turned to a so-called “adversarial self-replicating prompt.” This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies. This is broadly similar to traditional SQL injection and buffer overflow attacks, the researchers say.

To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA. They then found two ways to exploit the system—by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.



https://www.wired.com/story/here-come-the-ai-worms/
User avatar
Volkonski
Posts: 11794
Joined: Mon Feb 22, 2021 11:06 am
Location: Texoma and North Fork of Long Island
Occupation: Retired mechanical engineer
Verified:

Artificial Intelligence (AI) in General

#113

Post by Volkonski »

5 Beverly Hills students expelled for sharing AI-generated nudes of classmates

https://abc7.com/beverly-hills-5-studen ... /14505082/
Five eighth-grade students have been expelled from their Beverly Hills school for their involvement in using artificial intelligence to generate nude images of classmates and sharing them with others.

School officials say they learned last month of the images, in which faces of students at Beverly Vista Middle School were superimposed on AI-generated nude bodies.

The victims are 16 eighth-grade students, the district said.

In a letter to parents, the district said it's limited in the details it can share on what disciplinary action was taken. But parents told Eyewitness News they understood that the eighth-grade students accused of sharing the photos were expelled.

"I don't think they should have done what they did," eighth-grader Riley Yousef said. "I think they deserved to get expelled."

Riley's mother Fiona Javaheri called the punishment adequate and said they need to be made an example of.

"This kind of behavior is unacceptable. I do not blame the children - they are children, this is a middle school. I don't blame them at all. I do blame, sorry to say, the parents," Javaheri said. "The children are their responsibility, and they should be aware of what their children are getting up to on their devices."
“If everyone fought for their own convictions there would be no war.” ― Leo Tolstoy, War and Peace
User avatar
John Thomas8
Posts: 5255
Joined: Mon Feb 22, 2021 7:42 pm
Location: Central NC
Occupation: Tech Support

Artificial Intelligence (AI) in General

#114

Post by John Thomas8 »

User avatar
John Thomas8
Posts: 5255
Joined: Mon Feb 22, 2021 7:42 pm
Location: Central NC
Occupation: Tech Support

Artificial Intelligence (AI) in General

#115

Post by John Thomas8 »

User avatar
RTH10260
Posts: 14796
Joined: Mon Feb 22, 2021 10:16 am
Location: Switzerland, near the Alps
Verified: ✔️ Eurobot

Artificial Intelligence (AI) in General

#116

Post by RTH10260 »

:like:


pm. around the 20 minute marker he reports on the success rate of legal answers by AI --> +/- 80% failure (currrently)
User avatar
Estiveo
Posts: 2340
Joined: Mon Feb 22, 2021 9:50 am
Location: Inland valley, Central Coast, CA
Verified:

Artificial Intelligence (AI) in General

#117

Post by Estiveo »

Estiveo_20240327_190409.jpg
Estiveo_20240327_190409.jpg (93.01 KiB) Viewed 119 times
Image Image Image Image
User avatar
RTH10260
Posts: 14796
Joined: Mon Feb 22, 2021 10:16 am
Location: Switzerland, near the Alps
Verified: ✔️ Eurobot

Artificial Intelligence (AI) in General

#118

Post by RTH10260 »

The Guardian dabbling with AI to generate image descriptions

The html IMG ALT parameter to descibe an image when it cannot be displayed.

Who do you think is this:
Older white man, poofy hair, dark suit, red tie, standing outside under an umbrella someone else is holding, with police officers in vaunted caps standing in formation behind him.
► Show Spoiler

as part of link to a further reading in left margin of https://www.theguardian.com/us-news/202 ... ctatorship
User avatar
John Thomas8
Posts: 5255
Joined: Mon Feb 22, 2021 7:42 pm
Location: Central NC
Occupation: Tech Support

Artificial Intelligence (AI) in General

#119

Post by John Thomas8 »

Post Reply

Return to “Computers and Internet”