Xbox Pushes Ahead With Muse, a New Generative AI Model. Devs Say 'Nobody Will Want This'

WIRED

Microsoft is wading deeper into generative artificial intelligence for gaming with Muse, a new AI model announced today. The model, which was trained on Ninja Theory's multiplayer game Bleeding Edge, can help Xbox game developers build parts of games, Microsoft says. Muse can understand the physics and 3D environment inside a game and generate visuals and reactions to players' movements. Among the various use cases for Muse that Microsoft outlines in its announcement, perhaps the most intriguing involves game preservation. The company says Muse AI can study games from its vast back catalog of classic titles and optimize them for modern hardware.


Mistral's new AI model specializes in Arabic and related languages

ZDNet

Paris-based AI startup Mistral is focusing on large language models (LLMs) that understand region-specific languages and are tailored to grasp cultural nuances sometimes overlooked by larger, general-purpose models trained across many languages. Mistral has released its first "specialized" regional language-focused model, Saba. According to Mistral, the 24-billion-parameter model has been trained on "meticulously curated datasets" from across the Middle East and South Asia to meet a growing customer base in Arabic-speaking countries. The startup, co-founded by former Meta employees, is attempting to compete with the likes of ChatGPT and Microsoft Copilot with its own AI chatbot -- Le Chat. Mistral has developed and released several LLMs, both commercial and open source, that are accessible through websites, mobile apps, and APIs for third-party applications. Saba is similar in size to Mistral Small 3, an open-source, general-purpose model comparable to larger models such as Llama 3.3 70B, Qwen 32B, and even GPT-4o mini.


What does the 'e' in iPhone 16e stand for?

ZDNet

On Thursday, Apple unveiled the newest addition to its iPhone lineup: the iPhone 16e. Initially, many people assumed Apple would christen it the iPhone SE 4, but the new moniker makes more sense given the phone's more robust features and Apple's obvious desire to position it within the iPhone 16 family. Starting at $600, the iPhone 16e offers a host of enhancements over its SE predecessors. You'll find a refreshed design, a 6.1-inch OLED display, support for the AI-powered Apple Intelligence, an A18 chipset, Face ID, a USB-C connection, and Apple's first in-house modem. On the camera end, the new model sports a 48MP wide camera on the back with an integrated 2x telephoto lens and a 12MP front-facing camera.


When AI Thinks It Will Lose, It Sometimes Cheats, Study Finds

TIME - Tech

Complex games like chess and Go have long been used to test AI models' capabilities. But while IBM's Deep Blue defeated reigning world chess champion Garry Kasparov in the 1990s by playing by the rules, today's advanced AI models like OpenAI's o1-preview are less scrupulous. When sensing defeat in a match against a skilled chess bot, they don't always concede, instead sometimes opting to cheat by hacking their opponent so that the bot automatically forfeits the game. That is the finding of a new study from Palisade Research, shared exclusively with TIME ahead of its publication on Feb. 19, which evaluated seven state-of-the-art AI models for their propensity to hack. While slightly older AI models like OpenAI's GPT-4o and Anthropic's Claude Sonnet 3.5 needed to be prompted by researchers to attempt such tricks, o1-preview and DeepSeek R1 pursued the exploit on their own, indicating that AI systems may develop deceptive or manipulative strategies without explicit instruction.


Yikes: Jailbroken Grok 3 can be made to say and reveal just about anything

ZDNet

Just a day after its release, xAI's latest model, Grok 3, was jailbroken, and the results aren't pretty. On Tuesday, Adversa AI, a security and AI safety firm that regularly red-teams AI models, released a report detailing its success in getting the Grok 3 Reasoning beta to share information it shouldn't. Using three methods -- linguistic, adversarial, and programming -- the team got the model to reveal its system prompt, provide instructions for making a bomb, and offer gruesome methods for disposing of a body, among several other responses AI models are trained not to give. Also: If Musk wants AI for the world, why not open-source all the Grok models? During the announcement of the new model, xAI CEO Elon Musk claimed it was "an order of magnitude more capable than Grok 2." Adversa concurs in its report that the level of detail in Grok 3's answers is "unlike in any previous reasoning model" -- which, in this context, is rather concerning. "While no AI system is impervious to adversarial manipulation, this test demonstrates very weak safety and security measures applied to Grok 3," the report states.


Apple unveils new budget iPhone 16e to replace SE model - and it comes with plenty of surprise new features

Daily Mail - Science & tech

Apple has finally pulled back the curtain on its latest 'budget' smartphone – the iPhone 16e. Released February 28, the device runs Apple Intelligence features, including a ChatGPT integration with the smart assistant Siri. It also has a 6.1-inch display, a two-in-one camera system, an 'extraordinary' battery life, and sees the return of the 'notch' at the top of the display. 'iPhone 16e packs in the features our users love about the iPhone 16 lineup, including breakthrough battery life, fast performance powered by the latest-generation A18 chip, an innovative 2-in-1 camera system, and Apple Intelligence,' said Kaiann Drance, Apple's vice president of worldwide iPhone product marketing. 'We're so excited for iPhone 16e to complete the lineup as a powerful, more affordable option to bring the iPhone experience to even more people.'


How we test AI at ZDNET in 2025

ZDNet

The launch of ChatGPT in November 2022 unleashed a new era of AI with the technology soaring in popularity. As a result, many competitors entered the market, developing large language models (LLMs), chatbots, image generators, and more. Fast forward to 2025 and nearly every major tech company is launching AI products. The technology is also increasingly integrated into hardware, with AI features built into most smartphones, laptops, and tablets. As AI becomes ubiquitous, it is important to remember LLMs are nascent technologies.


Can Google's new research assistant AI give scientists 'superpowers'?

New Scientist

Google's AI "co-scientist" is based on the firm's Gemini large language models.

Google has unveiled an experimental artificial intelligence system that "uses advanced reasoning to help scientists synthesize vast amounts of literature, generate novel hypotheses, and suggest detailed research plans", according to its press release. "The idea with [the] 'AI co-scientist' is to give scientists superpowers," says Alan Karthikesalingam at Google. The tool, which doesn't have an official name yet, builds on Google's Gemini large language models. When a researcher asks a question or specifies a goal – to find a new drug, say – the tool comes up with initial ideas within 15 minutes. Several Gemini agents then "debate" these hypotheses with each other, ranking them and improving them over the following hours and days, says Vivek Natarajan at Google. During this process, the agents can search the scientific literature, access databases and use tools such as Google's AlphaFold system for predicting the structure of proteins.
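The generate-then-debate loop described above can be sketched in miniature. This is a hypothetical illustration only: the function names, scoring, and stopping rule are assumptions, not Google's actual co-scientist implementation, and the "debate" step is a random-score stand-in for agents critiquing each other.

```python
import random

def generate_hypotheses(goal, n=4):
    # Stand-in for the initial idea-generation step (minutes, per the article).
    return [f"Hypothesis {i} for: {goal}" for i in range(1, n + 1)]

def debate_round(hypotheses):
    # Stand-in for agents critiquing and scoring each hypothesis;
    # a real system would use model-generated critiques, not random scores.
    return {h: random.random() for h in hypotheses}

def refine(goal, rounds=3, keep=2):
    hypotheses = generate_hypotheses(goal)
    for _ in range(rounds):
        scores = debate_round(hypotheses)
        # Keep the top-ranked hypotheses and carry them into the next round.
        hypotheses = sorted(hypotheses, key=scores.get, reverse=True)[:keep]
    return hypotheses

best = refine("find a new drug target")
print(best)
```

The key structural idea is that ranking and refinement repeat over many rounds, which is where the "hours and days" of compute the article mentions would go.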


The Download: selling via AI, and Congress testing tech

MIT Technology Review

Imagine you run a meal prep company that teaches people how to make simple and delicious food. When someone asks ChatGPT for a recommendation for meal prep companies, yours is described as complicated and confusing. Why? Because the AI saw chopped chives on top of a bowl of food in one of your ads and determined that nobody wants to spend time chopping chives. It may seem odd for companies or brands to be mindful of what an AI "thinks" in this way, but it's already becoming relevant as consumers increasingly use AI to make purchase recommendations. The end result may be a supercharged version of search engine optimization (SEO), where making sure you're positively perceived by a large language model becomes one of the most important things a brand can do.


What is Perplexity Deep Research, and how do you use it?

ZDNet

Besides positioning itself as a better search tool than Google, Perplexity, the artificial intelligence (AI) company, wants to be an expert on any subject with its new Deep Research feature. This tool, launched by Perplexity AI in February 2025, combines autonomous reasoning with rapid processing to deliver exhaustive reports on specialized topics. According to Perplexity, "When you ask a Deep Research question, Perplexity performs dozens of searches, reads hundreds of sources, and reasons through the material to autonomously deliver a comprehensive report." At its core, the company claims, Perplexity Deep Research employs a proprietary framework called test time compute (TTC) expansion, which enables the systematic exploration of complex topics. Unlike conventional search engines that retrieve static results, the TTC architecture mimics human cognitive processes by iteratively refining its understanding through analysis cycles.
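The iterative search-read-reason cycle attributed to Deep Research can be sketched as a simple loop. All names here are hypothetical illustrations, not Perplexity's actual TTC framework: the search and analysis steps are placeholders, and a real system would let the model itself decide each follow-up query.

```python
def search(query):
    # Stand-in for a web search returning source snippets.
    return [f"source discussing {query} (part {i})" for i in range(3)]

def analyze(sources, notes):
    # Stand-in for reasoning over new sources; accumulates working notes.
    return notes + sources

def deep_research(question, max_cycles=4):
    notes, query = [], question
    for cycle in range(max_cycles):
        sources = search(query)
        notes = analyze(sources, notes)
        # Placeholder follow-up query; a real system would derive this
        # from gaps the model identifies in its notes so far.
        query = f"{question} (follow-up {cycle + 1})"
    return f"Report based on {len(notes)} source snippets."

print(deep_research("test time compute expansion"))
# → Report based on 12 source snippets.
```

The point of the sketch is the shape of the loop: each cycle's findings feed the next query, which is what distinguishes this style of "deep research" from a single static retrieval.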