Another OpenAI safety researcher has left the company. In a post on X, Steven Adler called the global race toward AGI a “very risky gamble.” Adler announced on Monday that he had left OpenAI late last year, after four years at the company.
As the U.S. races to lead the AI field, a researcher at its most prominent company, OpenAI, has quit.
In a series of posts on X, Steven Adler, who worked on AI safety at the company for four years, described his tenure as a "wild ride with lots of chapters."
OpenAI’s Sam Altman downplays impact of DeepSeek
OpenAI thinks DeepSeek may have used its AI outputs inappropriately, highlighting ongoing disputes over copyright, fair use, and training data.
The DeepSeek drama may have been briefly eclipsed by, you know, everything in Washington (which, if you can believe it, got even crazier Wednesday). But rest assured that over in Silicon Valley, there has been nonstop …
OpenAI on Thursday said it’s signed a partnership allowing the U.S. National Laboratories to use its latest line of AI models.
OpenAI is launching ChatGPT Gov today, a new version of its chatbot that US government agencies can self-host on their own Azure commercial cloud.
DeepSeek-R1’s Monday release has sent shockwaves through the AI community, disrupting assumptions about what’s required to achieve cutting-edge AI performance. This story focuses on exactly how DeepSeek managed this feat.
OpenAI announced it has uncovered evidence suggesting that Chinese artificial intelligence startup DeepSeek may have used its proprietary models.
The tech industry's reaction to the AI model DeepSeek-R1 has been wild. Pat Gelsinger, for instance, is elated and thinks it will make AI better for everyone.
Now that AI use is becoming more widespread, it is common for AI companies to emulate one another in an effort to make their products more palatable to consumers.