🌐 AI & Data Digest: Edition 4 🌐

Hi everyone, and welcome back to the fourth edition of the AI & Data Digest. One month in, and looking back over the last four weeks alone, it is scary to think just how many AI advancements we have seen in that time! 

So, let’s get straight to your essential roundup of this week's top news, insights, and opportunities in AI, data, and digital transformation.

📅 Welcome to Your Weekly Intelligence Brief!

It’s been a big week for all things AI & Data. Let’s start with our 10 BIG weekly news beats you might have missed! 

📰 Top 10 News Beats This Week:

𝗦𝗲𝗰𝗼𝗻𝗱 𝗚𝗹𝗼𝗯𝗮𝗹 𝗔𝗜 𝗦𝗮𝗳𝗲𝘁𝘆 𝗦𝘂𝗺𝗺𝗶𝘁 – AI leaders pledge safety and transparency in Seoul. 🌐🤝

𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗔𝗜 𝗜𝘀 𝗮𝗻 𝗘𝗻𝗲𝗿𝗴𝘆 𝗛𝗼𝗴 – Hugging Face's Sasha Luccioni highlights AI's massive energy use. ⚡💡

𝗔𝗻𝘁𝗵𝗿𝗼𝗽𝗶𝗰 𝗠𝗮𝗽𝘀 𝗔𝗜 𝗖𝗼𝗻𝗰𝗲𝗽𝘁𝘀 – Breakthrough in AI interpretability with Claude Sonnet. 🧠🔍

𝗢𝗽𝗲𝗻𝗔𝗜'𝘀 𝗛𝗼𝗹𝗹𝘆𝘄𝗼𝗼𝗱 𝗩𝗼𝗶𝗰𝗲 𝗖𝗼𝗻𝘁𝗿𝗼𝘃𝗲𝗿𝘀𝘆 – Scarlett Johansson disputes AI voice likeness. 🎭⚖️

𝗚𝗼𝗼𝗴𝗹𝗲'𝘀 𝗔𝗜 𝗙𝗲𝗮𝘁𝘂𝗿𝗲 𝗔𝗰𝗰𝘂𝗿𝗮𝗰𝘆 𝗖𝗼𝗻𝗰𝗲𝗿𝗻𝘀 – Erroneous AI search results fuel scepticism. ❌🔍

𝗛𝘂𝗴𝗴𝗶𝗻𝗴 𝗙𝗮𝗰𝗲 𝗟𝗮𝘂𝗻𝗰𝗵𝗲𝘀 𝗭𝗲𝗿𝗼𝗚𝗣𝗨 – $10M in GPU compute power for indie AI devs. 🖥️💸

𝗥𝗲𝗰𝗮𝗹𝗹 𝗜𝗻𝘃𝗲𝘀𝘁𝗶𝗴𝗮𝘁𝗲𝗱 𝗯𝘆 𝗨𝗞'𝘀 𝗜𝗖𝗢 – Microsoft under scrutiny for privacy concerns with new Recall feature. 🔍🛡️

𝗚𝗲𝗼𝗳𝗳𝗿𝗲𝘆 𝗛𝗶𝗻𝘁𝗼𝗻 𝗔𝗱𝘃𝗼𝗰𝗮𝘁𝗲𝘀 𝗨𝗕𝗜 – AI pioneer suggests universal basic income to combat AI-induced job losses. 💸🤖

𝗦𝗻𝗼𝘄𝗳𝗹𝗮𝗸𝗲 𝗕𝘂𝘆𝘀 𝗧𝗿𝘂𝗘𝗿𝗮 – Acquires AI observability firm, stock surges. 📈🤝

𝗔𝗺𝗮𝘇𝗼𝗻’𝘀 𝗔𝗜 𝗨𝗽𝗴𝗿𝗮𝗱𝗲 𝗳𝗼𝗿 𝗔𝗹𝗲𝘅𝗮 – Introducing a more conversational AI experience with a new subscription fee. 🗣️💡

🧠 My Insights:

It was quite a full-on week for me, up and down the country both personally and professionally. I had the good fortune to present at the Generative AI Summit in London, where my talk was titled 12 Months of Enterprise Generative AI: Lessons Learned From The Front Line.

I’ve posted a snapshot of my big predictions for Gen-AI in the next few years... what do you think?

I am also starting to spend my spare time exploring data ownership and what it means for consumers when we sign up to catch-all terms and conditions to use free social media services and everything in between. I penned some thoughts on this earlier in the week, where I discussed The Future of Consent Management In The AI Age.

They also say that 3 is the magic number. 

To that effect, I also took the time to jot down a few thoughts on the increasingly important LLMOps movement and why it is a necessity for organisations embarking on a generative AI transformation, especially those focussed on fine-tuning their own LLM! You can read those thoughts here: To Fine Tune, or Not to Fine Tune, That is the Question - How LLMOps Can Help

📔 Guest Blog:

Much like in the DevOps movement, open source software components and models are key to building distributed AI systems, so security validation and verification of where your software or base model components originate is incredibly important. After all, you never know what has been embedded in a software module, and the same is true of an ML/AI model. To that point, I was lucky enough to come across a blog by Thomas Wolf, Co-Founder & CSO at Hugging Face, who recently wrote about some of the controls they have embedded in their hub, including safer serialisation formats, malware scanning, and more. You can read Thomas's summary on LinkedIn. Equally, you can read the full breakdown by his colleague Omar Sanseviero on the Hub Security page over at Hugging Face.

For anybody unfamiliar with Hugging Face: in short, they create tools that make it easier for people to use AI and machine learning. They are best known for their online platform, where you can find and use many pre-built AI models for tasks like translating languages, understanding text, and generating images, though there is a far wider array of models available. If you are more familiar with software delivery, think of Hugging Face as being like GitHub, but instead of software and code, they store and make advanced AI technologies accessible to everyone, even those without technical expertise.
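To see why serialisation controls matter, here is a minimal, self-contained Python sketch (my own illustration, not Hugging Face's code) of how a pickle-based model file can execute arbitrary code the moment it is loaded, which is exactly the risk that hub-side scanning and safer formats guard against:

```python
import pickle

class Payload:
    # pickle serialises whatever callable-plus-arguments __reduce__
    # returns; pickle.loads() then CALLS that callable, so loading an
    # untrusted file means running the file author's code
    def __reduce__(self):
        return (eval, ("40 + 2",))

blob = pickle.dumps(Payload())       # this blob could be a "model file"
result = pickle.loads(blob)          # eval("40 + 2") runs during deserialisation
print(result)
```

This is why formats such as safetensors, which store only raw tensor data and no executable objects, are increasingly preferred for distributing model weights.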

Thanks to Thomas for letting me include the blog this week. 

🚀 AI Success Stories:

Check out these four stories that highlight how AI is being deployed across sectors to solve complex problems and open up new opportunities for efficiency around the world. This week, we have included one case study from each of the cloud hyperscalers, as well as an AI for good case study.

👉How Amazon is harnessing solar energy, batteries, and AI to help decarbonize the grid

👉Wendy's Taps Google Cloud to Revolutionise the Drive-Thru Experience with AI

👉Swift Uses AI to Fight Fraud

👉 ‘Saving lives’: How AI is helping doctors better predict heart failure

📅 Upcoming AI Events in London This May:

📅 AI World Congress -  30-31 May 2024, Kensington Events and Conference Centre

📅 AI Product Meetup -  30 May 2024, 110 Southwark Street, London

📅 Moving Target: AI Risk Management Fundamentals -  25 Copthall Avenue London EC2R 7BP

🔍 Research & Reports:

This week's featured reports include a piece by Snowflake which covers several popular generative AI use cases, as well as how to get started with generative AI and put it into production with private company data. Secondly, I've added a paper by Geoffrey Hinton and Drew van Camp in which they explain ways to make neural networks (a type of AI) work better by keeping them simple.

In the paper, they suggest adding a bit of Gaussian randomness to the network's weights and penalising how much information those weights carry, so that training balances accuracy against simplicity. This approach helps the network learn from examples without getting overly complicated. The paper is pretty detailed, and it took me a few reads to get my head around it. 

👉Generative AI in Practice: Exploring Use Cases to Harness Enterprise Data

👉Keeping Neural Networks Simple by Minimising the Description Length of the Weights 
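As a rough, hand-rolled sketch of the paper's intuition (my own toy example with made-up data, not the authors' formulation): fit a single weight by minimising the expected squared error under Gaussian weight noise, plus a penalty that favours small, simple weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: y = 2x plus a little observation noise
X = rng.normal(size=200)
y = 2.0 * X + 0.1 * rng.normal(size=200)

def mdl_style_loss(w, noise_std=0.3, penalty=0.05):
    # expected squared error when the weight is perturbed by Gaussian
    # noise, plus a penalty favouring small "cheap to describe" weights
    samples = w + noise_std * rng.normal(size=500)
    errs = ((y[:, None] - X[:, None] * samples[None, :]) ** 2).mean()
    return errs + penalty * w ** 2

# coarse 1-D search over the single weight
grid = np.linspace(-1.0, 5.0, 601)
best = grid[np.argmin([mdl_style_loss(w) for w in grid])]
print(f"fitted weight ~ {best:.1f}")  # lands near (slightly below) the true 2.0
```

The penalty pulls the fitted weight slightly below the true value of 2.0, which is the accuracy-versus-simplicity trade-off the paper formalises via description length.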

⚖️ Regulatory & Ethics Watch:

👉 Global AI Companies Commit to Safety Standards

At the AI Seoul Summit, 16 major AI companies from around the world, including Amazon, Google DeepMind, and Microsoft, agreed to new safety commitments. Key outputs of the event include the establishment of "Frontier AI Safety Commitments" which require companies to:

1. Publish safety frameworks to measure and mitigate risks of their AI models.

2. Commit to halting development or deployment of models if risks cannot be controlled.

3. Ensure transparent governance and public reporting on AI safety practices.

The next steps involve refining these safety frameworks with input from trusted actors and preparing for the AI Action Summit in France in 2025. These measures aim to set a global standard for AI safety and ensure responsible AI development. 

For more details on the AI safety commitments agreed in Seoul, visit this link at Gov.uk 🇰🇷🇬🇧

👉 The EU AI Act Has Been Approved 

In other landmark AI news, the EU AI Act has now been approved and is set to take effect next month. The act establishes a comprehensive set of regulations for AI, surpassing the voluntary-compliance approach of the US and China's control-focused model. It mandates strict transparency for high-risk AI systems and limits real-time biometric surveillance to severe crime prevention. It bans social scoring, predictive policing, and unauthorised facial image scraping. Non-EU companies using EU data must comply, setting a potential global standard akin to the GDPR. Fines scale with the type of infraction, reaching up to €35 million or 7% of global annual turnover for the most serious breaches. 

On the one hand, this is a positive step to take. Though I do question whether EU members have shot themselves in the foot by tying AI adoption across the region up in red tape before it really gets off the ground! The act also applies to any business that processes the data of EU citizens, so in essence the EU AI Act is pretty much a global AI policy by stealth. 

Anyhow, if you are interested in how relevant and fit for purpose your organisation's IT controls are for complying with the EU AI Act, then feel free to use my OpenAI GPT, EU AI Act - ControlSync. Using the GPT, you can perform a short SWOT and gap analysis of your organisation's IT controls and benchmark these against best practices set out by ISO and NIST. YOU ARE WELCOME!

https://chatgpt.com/g/g-zGBiIiJGO-eu-ai-act-controlsync

🚀 Careers & Opportunities in AI & Data:

Head over to Otta to take a look at these 5 fantastic data, ML & AI opportunities in the UK:

Job Highlight #1: Director of Applied Data Science: Mastercard 💳 - Apply Here.

Job Highlight #2: Head of Data & AI: Thredd 🏦 - Apply Here

Job Highlight #3: Data Science Associate Director: Publicis Groupe 🧑‍💻 - Apply Here

Job Highlight #4: Vice President of Data Analytics: Gentrack 🪫- Apply Here

Job Highlight #5: Data Science Manager: Monzo 💸 - Apply Here

🚀 This Week's Survey:

And our survey says… I am not one to keep flogging a dead horse! After the last few attempts, I haven't really seen good take-up of the survey section of the AI & Data Digest. For now, we will put it on the shelf and look to include something else next week! Bon voyage to our survey section… for now 👀

Image created with DALL·E 2

🔗 Connect With Me!

Engage with me further on these topics and more by connecting on LinkedIn and scheduling a discussion via Calendly.

👍 Like & Share

Enjoyed this newsletter? Hit like and share it within your network to help others stay on top of the latest in AI and data! Also, if you have any feedback on other bits to include, then let me know!

Thanks for reading!

Ben @ The AI & Data Digest Team

www.webuild-ai.com
