
Apple launches eight small AI language models to compete with Microsoft Phi-3

Betting that there is strength in numbers, Apple has launched eight small AI language models in a strategic move into the highly competitive artificial intelligence market. Collectively known as OpenELM, the compact models are designed to run offline, directly on the device, making them ideal for smartphones.

Published on the open-source AI community Hugging Face, the models come in 270 million, 450 million, 1.1 billion, and 3 billion parameter versions. Users can download each OpenELM model in either a pretrained or an instruction-tuned variant.

Pretrained models provide a foundation that users can fine-tune and build on. Instruction-tuned models are already trained to respond to instructions, making them better suited to dialogue and interaction with end users.
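For readers who want to experiment, the sketch below shows how one of the checkpoints might be loaded with the Hugging Face transformers library. The apple/OpenELM-450M-Instruct repository name and the Llama 2 tokenizer dependency are assumptions here rather than details from this article, so check the Hugging Face model cards for exact usage.

```python
# Minimal sketch: loading an OpenELM checkpoint with Hugging Face transformers.
# Assumptions: the checkpoints live under the "apple/" namespace, and OpenELM
# reuses the Llama 2 tokenizer (a gated repo that requires access approval).
from transformers import AutoModelForCausalLM, AutoTokenizer

# The "-Instruct" suffix selects the instruction-tuned variant;
# drop it to get the pretrained base model instead.
model_id = "apple/OpenELM-450M-Instruct"

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # assumes OpenELM ships custom modeling code
)

inputs = tokenizer("Summarize this email in one sentence:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```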

Apple hasn’t proposed a specific use case for these models, but they could be applied to tasks such as parsing emails and texts, or powering assistants that offer intelligent suggestions based on that data. It’s a similar approach to the one taken by Google, which deployed its Gemini AI model across its Pixel smartphone lineup.

The models were trained on publicly available datasets, and Apple is sharing both the code for CoreNet (the library used to train OpenELM) and the “recipes” for the models. That means users can see exactly how Apple built them.

Apple’s release comes shortly after Microsoft announced Phi-3, a family of small language models capable of running locally. Phi-3 Mini, a 3.8 billion parameter model trained on 3.3 trillion tokens, can still handle a 128K-token context window, making it comparable to GPT-4 and beating Llama-3 and Mistral Large in terms of token capacity.
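As an illustration of what running such a model locally looks like, here is a minimal sketch using the Hugging Face transformers library. The microsoft/Phi-3-mini-128k-instruct repository name is an assumption, so consult Microsoft’s model card for exact usage.

```python
# Minimal sketch: running Phi-3 Mini locally with Hugging Face transformers.
# Assumption: the 128K-context checkpoint is published as
# "microsoft/Phi-3-mini-128k-instruct".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # assumes Phi-3 ships custom modeling code
)

# Format a single chat turn with the model's own chat template, then generate.
messages = [{"role": "user", "content": "Draft a short reply to: 'Are we still on for lunch tomorrow?'"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```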

Being open source and lightweight, Phi-3 Mini could potentially replace existing assistants like Apple’s Siri or Google’s Gemini for some tasks, and Microsoft has reportedly already tested Phi-3 on an iPhone with satisfactory results and fast token generation.

Apple has not yet integrated these new AI language model capabilities into its consumer devices, but the upcoming iOS 18 update is rumored to include new AI features that use on-device processing to protect user privacy.

Apple hardware has an advantage for local AI because it combines device RAM and GPU video RAM (VRAM) into a single pool of unified memory. That means a Mac with 32GB of RAM (a typical configuration for a PC) can use that memory as if it were GPU VRAM to run AI models. By comparison, Windows devices are constrained by separate device RAM and GPU VRAM, so users often need to buy a powerful GPU with 32GB of VRAM to have enough memory to run AI models.
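To make the memory point concrete, here is a short PyTorch sketch of the device selection involved: on Apple silicon the mps backend draws on the machine’s unified memory, while cuda on a Windows/Linux box is limited to the discrete GPU’s own VRAM. The device names are standard PyTorch identifiers.

```python
# Sketch: picking a device for local AI inference in PyTorch.
import torch

if torch.backends.mps.is_available():
    device = torch.device("mps")   # Apple GPU, backed by unified system RAM
elif torch.cuda.is_available():
    device = torch.device("cuda")  # discrete GPU, limited by its own VRAM
else:
    device = torch.device("cpu")   # fallback: slow but always available

print(f"Running on: {device}")
# model.to(device) would then place a loaded model on that device;
# on a Mac, the model's weights compete only for the shared RAM pool.
```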

However, Apple lags behind Windows/Linux in the field of AI development. Most AI applications revolve around hardware designed and built by Nvidia, which Apple phased out in favor of its own chips. As a result, there is relatively little Apple-native AI development, and using AI on Apple products often requires translation layers or other complex procedures.

