AI Under the Guardrails of Regulations


AI Tech Circle

Hey Reader!

I spent the entire week planning to continue our earlier discussion on building business-specific LLMs with RAG and to cover its technical aspects. However, the EU’s new regulation, with its stringent guidelines, caught my attention. So I thought I’d cover that instead, since anyone working in the AI field needs to understand these regulations.

So let’s go deeper today into AI safety and regulation, and how AI is being put behind bars to ensure it does not slip out of the control of humans, who are the ultimate custodians of whatever comes out of AI.

Significant developments in AI/ML have occurred since November 2022, when ChatGPT was released. These developments have pushed the field to put much more effort into managing AI safety, transparency, and bias during model building.

This has also drawn the attention of government regulators and even heads of state.

How can we ensure that AI is safer for humans?

Then another debate started about the ‘extinction of humans due to AI,’ and media outlets began pouring out web pages and news stories about this ‘threat,’ with examples ranging from Time’s ‘An AI pause is humanity’s best bet for preventing Extinction’ to Forbes’s ‘Will ChatGPT Lead To Extinction Or Elevation Of Humanity? A Chilling Answer’.

Let’s cover the regulations issued so far and their key points. All of us working in AI need to know about the stringent laws that governments are passing.

EU AI Act: The use of artificial intelligence in the EU will be regulated by the AI Act. This Wednesday, the EU Parliament approved the Artificial Intelligence Act, which aims to ensure safety and compliance with fundamental rights. The law takes a “risk-based approach” to products and services that use artificial intelligence.

Below are a few of the critical areas; there are also law-enforcement exceptions, which you can read about in detail at the link above.

Prohibited AI Uses:

Under the latest regulations, specific AI practices jeopardizing individuals’ rights are prohibited.

  • These include using biometric categorization systems that identify sensitive traits and the indiscriminate collection of facial images from online sources or surveillance footage for facial recognition databases.
  • Emotion detection technology in work and educational environments, social scoring systems, predictive policing based purely on personal profiling, and AI designed to influence human actions or take advantage of vulnerabilities are all banned.

High-Risk AI Systems:

The AI Act introduces a detailed framework for identifying high-risk systems. This category encompasses systems crucial to safety or designed for use in essential services, employment sectors, law enforcement, or within judicial and democratic frameworks.

  • These AI systems are required to evaluate and mitigate risks, keep detailed usage records, ensure transparency and accuracy in their operations, and guarantee human supervision.
  • Individuals will have the right to lodge complaints about AI systems and obtain clear explanations regarding high-risk AI-driven decisions that impact their rights.

Transparency Obligations:

Systems using general-purpose AI (GPAI) and the underlying GPAI models are subject to specific transparency obligations. These include adhering to EU copyright law and providing comprehensive summaries of the training data used. More powerful GPAI models that carry potential systemic risks face heightened obligations: conducting model evaluations, identifying and addressing any systemic risks, and documenting any incidents.

Furthermore, any content that has been artificially created or altered, known as “deepfakes” (including images, audio, or video), must be explicitly identified as such.

What About Generative AI?

Adjustments were made to include stipulations for generative AI models. These models power chatbot systems capable of generating novel, realistic responses, images, and other outputs.

Creators of general-purpose AI models, from European startups to industry giants like OpenAI and Google, are now required to provide a comprehensive outline of the text, images, videos, and other internet-sourced data used to train these systems. They must also comply with EU copyright law.

Moreover, any AI-created deepfake imagery, video, or audio depicting people, places, or events in an artificially altered manner must be clearly marked as artificially manipulated.
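
To make the labeling requirement concrete, here is a minimal sketch of how a generation pipeline might embed a machine-readable disclosure tag in an image’s metadata. This is only an illustration using Pillow’s PNG text chunks; the Act does not prescribe any specific mechanism, and production systems would more likely adopt a provenance standard such as C2PA. The tag names and file path below are hypothetical.

    # pip install Pillow
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def save_with_ai_disclosure(img, path):
        """Save a generated image with a machine-readable AI-disclosure tag."""
        meta = PngInfo()
        # Hypothetical tag names; the Act does not mandate a specific schema
        meta.add_text("ai_generated", "true")
        meta.add_text("disclosure", "This image was artificially generated.")
        img.save(path, pnginfo=meta)

    # Stand-in for a model output image
    img = Image.new("RGB", (256, 256), color="gray")
    save_with_ai_disclosure(img, "output.png")
    print(Image.open("output.png").text)  # {'ai_generated': 'true', ...}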

Entry into Force:

  • 6 months: bans on prohibited practices (AI applications posing an unacceptable risk) take effect
  • 9 months: ‘codes of practice’ to be established by the regulators
  • 12 months: rules for general-purpose AI models not classified as high-risk apply
  • 36 months: obligations for high-risk AI systems enforced

For more details, see the link above.

Other noteworthy work on AI regulations:

Weekly News & Updates…

This week’s AI breakthroughs mark another leap forward in the tech revolution.

  1. Cohere has released C4AI Command-R, a 35-billion-parameter model with openly available weights.
  2. Devin is the first AI software engineer and sets a new state of the art on the SWE-bench coding benchmark. Devin is an autonomous agent that solves engineering tasks using its own shell, code editor, and web browser.
  3. Claude 3 Haiku is three times faster than its peers, enabling enterprises to analyze large volumes of documents, such as quarterly filings, contracts, or legal cases, quickly.
  4. LlamaParse: Parsing Financial PowerPoints. This cookbook shows how to use LlamaParse to parse a financial PowerPoint (see the sketch after this list).
  5. Review completed & Altman, Brockman to continue to lead OpenAI.
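
For item 4, here is a minimal sketch of what parsing a financial PowerPoint with LlamaParse can look like. It is not the cookbook’s exact code; the file name is a placeholder, and you would need your own LlamaCloud API key.

    # pip install llama-parse
    from llama_parse import LlamaParse

    parser = LlamaParse(
        api_key="llx-...",       # your LlamaCloud API key
        result_type="markdown",  # return parsed content as markdown
    )

    # Parse a (hypothetical) financial PowerPoint into documents
    documents = parser.load_data("./quarterly_results.pptx")
    print(documents[0].text[:500])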

The Cloud: the backbone of the AI revolution

Favorite Tip Of The Week:

Here’s my favorite resource of the week.

Potential of AI

Things to Know

  • Phoenix: the most advanced replica and text-to-video model available via end-to-end APIs.

The Opportunity…

Podcast:

  • This week’s Open Tech Talks episode 130 is “Digital Safeguards: Unlocking Cybersecurity Basics” with Nick Lorizio, Founder of AstuteTechnologists.

Apple | Spotify | Google Podcast

Courses to attend:

Events:

Tech and Tools…

  • Bland AI: Add voice AI to your website, mobile apps, phone calls, video games, and even your Apple Vision Pro
  • Pika: adds the ability to generate and integrate sound into your videos. Either prompt the sound you want or let Pika generate it automatically based on the content of your video.
  • MLX: this Python library is the easiest way to begin building on top of Apple’s machine-learning framework (see the sketch below)
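
As a quick taste of MLX, here is a minimal sketch of basic array operations on Apple silicon. MLX arrays live in unified memory, and computation is lazy until you evaluate.

    # pip install mlx  (requires Apple silicon)
    import mlx.core as mx

    a = mx.array([1.0, 2.0, 3.0])
    b = mx.ones(3)
    c = a * b + 2   # builds a lazy computation graph
    mx.eval(c)      # forces evaluation
    print(c)        # array([3, 4, 5], dtype=float32)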

Data Sets…

  • MMCSG (Multi-Modal Conversations in Smart Glasses) from Meta. It comprises two-sided conversations recorded using Aria glasses, featuring multi-modal data such as multi-channel audio, video, accelerometer, and gyroscope measurements. This dataset is suitable for research in automatic speech recognition and activity detection.
  • OpenML: an open platform for sharing datasets, algorithms, and experiments (see the sketch after this list)
  • Datahub: several dataset collections organized by topic
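
For OpenML, the openml Python package lets you pull any hosted dataset by its ID. A minimal sketch (dataset 61 is the classic iris set):

    # pip install openml
    import openml

    # Fetch a dataset by its OpenML ID (61 = the classic iris dataset)
    dataset = openml.datasets.get_dataset(61)
    X, y, _, _ = dataset.get_data(target=dataset.default_target_attribute)
    print(X.shape)     # (150, 4)
    print(y.unique())  # the three iris species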

Other Technology News

Want to stay on the cutting edge?

Here’s what else is happening in Information Technology you should know about:

  • Apple Bought an AI Startup: What to Know About Its AI Plans, as reported by CNET
  • Under Armour’s AI-powered commercial stirs debate on creative accreditation, as reported by Marketing Interactive.

Earlier editions of the newsletter

That’s it!

As always, thanks for reading.

Hit reply and let me know what you found most helpful this week – I’d love to hear from you!

Until next week,

Kashif Manzoor

The opinions expressed here are solely my conjecture based on experience, practice, and observation. They do not represent the thoughts, intentions, plans, or strategies of my current or previous employers or their clients/customers. The objective of this newsletter is to share and learn with the community.