New AI Guidelines That Every CEO Needs To Know

Many companies now use AI in their operations. As the CEO of such a company, it is crucial that you know the guidelines for using AI and ensure they are being followed. Artificial intelligence has developed remarkably in the past few years. With the introduction of models and tools such as ChatGPT, Gemini, and Devin, companies have adopted AI on a mass scale, improving the efficiency of their operations by automating repetitive tasks. However, under the new Act passed by the European Union, the use of AI is now restricted, and certain applications of AI have been banned.

The rationale behind these strict restrictions is that AI was developed to serve humans, not the other way around, and it should be focused on improving human well-being.

Use Cases That Are Banned

As noted above, certain uses of AI have been banned to protect human well-being; lawmakers classify these AI applications as posing "unacceptable" risk. These restrictions do have exemptions, which we will discuss below.

  • Using artificial intelligence to manipulate a person's behavior in harmful ways is prohibited.
  • Biometric categorization systems that infer political or religious beliefs, or sexual orientation, have been banned.
  • Social scoring systems that result in discrimination have also been banned.
  • AI tools that remotely identify a person from their biometrics, such as physical appearance, behavior, or other biological characteristics, can no longer be used.

Failure to follow any of the above guidelines can result in a fine of up to 35 million euros or 7% of the company's global annual turnover, whichever is higher.

However, these restrictions still leave considerable room for interpretation; whether the guidelines have been followed is largely left to the discretion of whoever applies them. For example, does running AI-generated ads for fast food count as manipulating behavior in harmful ways? And how do we judge whether a social scoring system will lead to discrimination in a world where we are already credit-checked and scored by many government and private bodies?


As mentioned earlier, there are a few exemptions to the above rules and regulations:

  • Government bodies may use these "unacceptable" AI systems in limited circumstances.
  • Law enforcement bodies can use them to prevent terrorism, find missing people, and handle similar situations.
  • They can also be used for certain scientific research and development.

Classifications Of AI Tools

Beyond the "unacceptable" category, lawmakers have divided AI tools into three categories based on their influence on human life: high risk, limited risk, and minimal risk.

  • High risk: AI used in areas such as self-driving cars and medical applications falls into this category. Businesses operating these AI systems will face stricter rules and regulations.
  • Limited risk and minimal risk: These are AI models used mainly for entertainment purposes, such as gaming and text or video generation. Companies in these categories will face far fewer restrictions but must maintain a certain level of transparency about their AI usage.

Way Forward

These rules and regulations indicate that lawmakers have started taking AI seriously and are acting to regulate it. It is now more important than ever for business leaders to stay up to date with the guidelines and be prepared for upcoming changes.
