Google Gemini faked?👀

Mistral AI raises $400m and Apple releases AI framework MLX

DataWake

Good day fellow data aficionados.

How are you doing on this greaaat Monday? I hope you're all well. Here we've got even more snow! 😢

Here's today's zesty agenda:

  • Google Gemini faking?! 👀

  • Mistral AI raises $400m 💰

  • Google unveils AlphaCode 2

  • Apple releases AI framework MLX

Google faking Gemini 👀

(Source)

In the aftermath of Google's much-anticipated debut of the Gemini AI model, the tech community finds itself grappling with mixed emotions. While the unveiling garnered attention with a video titled "Hands-on with Gemini: Interacting with multimodal AI," boasting a million views, it turns out the most impressive demos were, in fact, a fabrication. The video showcased seemingly seamless interactions, from evolving duck sketches to tracking a ball in a cup-switching game, all purportedly demonstrating Gemini's prowess.

However, the revelation that the video was staged, using carefully tuned text prompts and still images, has left users questioning the authenticity of Google's technological claims. While Google insists the video showcases real outputs from Gemini, the discrepancy between the staged interactions and the actual capabilities raises concerns about the transparency and integrity of the demonstration. The tech community now grapples with the blurred lines between aspirational representation and factual demonstration in AI videos, prompting a reevaluation of expectations from such unveilings.

What do you think? Was Google “faking” this or is it OK? Is Google losing the AI game? 😮

Mistral raises $400m 📈

(Source)

Mistral, the folks who release their models as torrents on X. Big lads.

French start-up Mistral AI has achieved unicorn status, swiftly emerging as a major player in the AI sector after a significant funding round that raised €385 million. Valued at around $2 billion, Mistral attracted support from notable names such as Andreessen Horowitz (A16z), Nvidia, and Salesforce in this funding surge, marking an impressive ascent since its May 2023 launch.

Founded by former researchers from Google’s DeepMind and Meta, Mistral focuses on developing open-source large language models, positioning itself as a counterforce to AI market players like OpenAI and Google. The company recently launched beta access to its platform services and shared a substantial torrent of an open-source AI model on X. A16z highlighted Mistral's pivotal role in fostering a passionate developer community around open-source AI, emphasizing its potential to lead the way in achieving robust, widely adopted, and trusted AI systems.

Google unveils AlphaCode 2 📜

(Source)

Google DeepMind has shaken up the world of competitive programming with the introduction of AlphaCode 2, a cutting-edge Artificial Intelligence system that builds upon the swift and accurate AlphaCode. Leveraging the potent Gemini model from Google’s Gemini Team, AlphaCode 2 stands out with its advanced architecture based on Large Language Models (LLMs) and a specialized search and reranking system tailored for competitive programming challenges.

With a meticulous process involving the Gemini Pro model, fine-tuning through the GOLD training target, and a deliberate sampling strategy generating up to a million code samples per challenge, AlphaCode 2 exhibited remarkable prowess on the Codeforces platform. Solving 43% of problems within ten attempts, it outperformed its predecessor and now sits in the 85th percentile of competitors, showcasing the potential of AI to tackle intricate problems in programming. This technological leap opens doors for collaboration between humans and AI programmers, pushing the boundaries of what was once deemed beyond AI capabilities in competitive programming. Cheers to AlphaCode 2 for raising the bar and showing us the exciting future of AI-assisted problem-solving in coding competitions!
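The sample-massively-then-filter-then-rerank pipeline described above can be caricatured in a few lines of Python. To be clear, this is a toy sketch of the general idea, not DeepMind's actual system: the candidate generator, the public-test filter, and the scoring function here are all made-up stand-ins.

```python
import random

def generate_candidates(n, seed=0):
    """Stand-in for an LLM sampling n candidate programs.
    Here each 'program' is just a random integer guess."""
    rng = random.Random(seed)
    return [rng.randint(0, 100) for _ in range(n)]

def passes_public_tests(candidate, target=42):
    """Stand-in for filtering candidates against the problem's example tests."""
    return abs(candidate - target) <= 10

def rerank(candidates, target=42):
    """Stand-in for a reranking model: order surviving candidates by a score."""
    return sorted(candidates, key=lambda c: abs(c - target))

samples = generate_candidates(1000)                         # massive sampling
survivors = [c for c in samples if passes_public_tests(c)]  # filter out obvious failures
best_first = rerank(survivors)                              # rerank what's left
submissions = best_first[:10]                               # submit at most ten attempts
```

The point of the shape: generation is cheap and noisy, so you over-sample, throw away candidates that fail the visible tests, and spend your limited submission budget on the highest-ranked survivors.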

Apple releases MLX

(Source)

MLX, APX, XXX?

A fusion of elegance and efficiency…

Sounds like whiskey on the rocks tbh

MLX is Apple's tailored response to simplify the intricate art of training and deploying ML models on their silicon. With a user-friendly design inspired by Jax, PyTorch, and ArrayFire, MLX flaunts Python and C++ APIs, high-level packages like mlx.optimizers and mlx.nn, and a touch of lazy computing magic. Picture this: arrays materialize only when needed, computations are dynamic, and operations flow seamlessly between CPUs and GPUs, all thanks to shared memory wizardry.
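That lazy-computing trick is worth a closer look. Here's a toy illustration of the concept in plain Python: expressions build up a deferred computation, and nothing is actually calculated until you ask for the value. This is not MLX's real API (in MLX you'd work with `mx.array` and force evaluation with `mx.eval`); it just shows the idea.

```python
class LazyArray:
    """Toy lazy value: building a deferred computation graph and only
    materializing results when explicitly evaluated. Concept demo only,
    not actual MLX code."""

    def __init__(self, fn):
        self._fn = fn        # deferred computation
        self._value = None   # nothing computed yet

    def __add__(self, other):
        return LazyArray(lambda: self.eval() + other.eval())

    def __mul__(self, other):
        return LazyArray(lambda: self.eval() * other.eval())

    def eval(self):
        if self._value is None:      # materialize on first access only
            self._value = self._fn()
        return self._value

def array(x):
    """Wrap a plain value as a lazy one."""
    return LazyArray(lambda: x)

a = array(2)
b = array(3)
c = a * b + array(4)   # no arithmetic has happened yet, just graph building
result = c.eval()      # everything runs here: 2 * 3 + 4
```

Deferring work like this is what lets a framework see the whole computation before running it, so it can pick where (CPU or GPU) and when each piece executes.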

But MLX isn't just a framework; it's a gateway to democratizing machine learning. Apple's GitHub announcement emphasizes simplicity and efficiency, inviting researchers to extend and enhance MLX for swift exploration of new ideas. From transformer language models to speech recognition, MLX showcases its prowess with examples like Stable Diffusion's 40% better throughput than PyTorch. Apple, though fashionably late to the AI party, aims to simplify ML model building, potentially bringing generative AI magic to Apple devices.

Do you think Apple will enter the GPU (TPU?) race soon? 😁

Dank Memes

That's all for today, folks! If you want more news or just some dank memes, make sure to follow the newsletter and find us on X!