The AI Act categorizes AI systems by risk level: systems posing an unacceptable risk are banned outright, high-risk systems are subject to strict requirements such as risk management, documentation, and human oversight, and limited-risk systems face lighter obligations focused mainly on transparency. Providers of high-risk AI systems must comply with a comprehensive set of obligations, while providers of general-purpose AI (GPAI) models must maintain proper technical documentation and, where their models are classified as posing systemic risk, carry out additional risk assessments and mitigation. Key provisions include prohibitions on manipulative AI techniques and requirements for deploying AI in sensitive areas such as law enforcement, healthcare, and employment.