
AI Risk and Regulation: The EU Commission’s New Framework

With risk comes regulation.

Advancement in the development and deployment of AI-based tools in medicine (and every other area of life) is more rapid than ever. Thus, we must consider the possible risks associated with such systems; questions about regulation are becoming unavoidable. As AI-powered systems and machines set a new pace and bring a new standard to fact-based decision-making, it is time to put mechanisms in place that identify what may go wrong and determine how best to tackle such issues.

The EU weighs in.

In its attempt to carve out a human-centric approach to all AI processes, the European Parliament stated its intention to update the EU’s existing framework of appropriate ethical principles. The plan is expected to simultaneously weigh European values and users’ needs.

The first EU guidelines geared toward this aim were published in April 2019. Ursula von der Leyen (then President-elect of the European Commission) announced that the Commission would put forward a further legislative proposal to achieve a more coordinated European approach to the ethical implications of AI. In essence, the guidelines recommended a protocol for the design, deployment, and use of AI and AI-based services within the EU.

Why the need for regulation?

Although artificial intelligence offers many benefits, apprehensions have been raised around its ethics, legality, and economic implications. Some critics are even worried about fundamental human rights. 

For instance, AI may pose a serious risk to users’ rights to privacy and personal data protection. It can also possibly increase discrimination levels when algorithms or systems are trained using biased datasets. Other common fears include the destruction of jobs in the labor market, the spreading of disinformation, and the creation of autonomous weapons.

By implementing its human-centric set of rules, the EU will influence policymakers across the globe as they build their respective plans to combat such risks efficiently. After all, the EU is considered the front-runner in establishing a comprehensive ethical framework for artificial intelligence.

In the EU, this conversation has been going on for years.

Back in January 2017, the European Parliament instructed the European Commission to assess the impact of AI and make wide-ranging recommendations for the civil laws governing robotics. Not only did this work produce a code of ethics for robotics engineers; it also led the Commission to establish a group centered on robotics and AI.

This High-Level Expert Group on AI was tasked with laying out the non-binding Ethics Guidelines for Trustworthy Artificial Intelligence. Overall, 52 independent experts worked on how to effectively secure the development of ethical AI systems in the EU. 

Their key requirements for achieving this are:

a. Human, social, and environmental wellbeing:

This principle states that AI systems should be used only to benefit individuals, society, and the environment. The product or application must help solve the challenges at hand. Furthermore, all systems and their objectives must be clearly defined by the proper authorities.

b. Fairness:

AI systems should be inclusive and accessible. Their use should not result in unfair discrimination against individuals, communities, or groups, and discrimination on the basis of age, sex, race, gender identity, or sexual orientation will not be tolerated.


c. Privacy protection and security:

All AI systems must respect and uphold privacy rights and maintain data security. Unfortunately, this is currently one of the most violated ethical principles when it comes to AI. This principle is intended to make sure that robust security measures are always put in place.

d. Reliability and safety:

AI systems should operate reliably in accordance with their intended purpose. This means they must be reliable and accurate, and their results must be reproducible. The use of AI systems should also not pose an unreasonable safety risk to users, and proportionate measures must be adopted to minimize such risks.

e. Transparency and understanding:

AI-based systems should possess a high level of transparency. People ought to be aware of when they are being engaged or significantly impacted by an AI system. Also, all disclosures about AI systems should be provided in a timely manner and with reasonable justification, including adequate information that helps subjects understand the outcomes used in decision-making.

f. Contestability:

Users should be able to challenge the impacts, use, and outputs of any AI system they come in contact with. An accessible avenue for objection must be provided for users. This is crucial when a system affects a person, community, group, or environment. 

g. Accountability:

All individuals responsible for different phases of an AI system’s lifecycle must be identifiable and accountable for the outcomes of their machines, and human oversight should be enabled throughout. Please note that the application of legal principles regarding the accountability of AI systems is still under development. Even so, answerability should be considered a top priority.


Complicated, right? Regulating advancing technology will always be a massive work in progress. What do you think of this approach? Do you think AI should be regulated at all?

Contact us or leave a comment if you have questions or ideas about the EU’s regulation of AI-based systems.


Leadership Goes Beyond the Pecking Order

Good leaders get people to follow them. As a family medicine resident, much of my job revolves around educating my patients and fostering healthy habits and prevention.

To be an effective doctor, I must find a way to get my patients to listen to me and follow my instructions. I need their trust. Communication skills, accountability, and emotional intelligence go a long way toward that, and those are all traits of good leaders.

Other medical specialties benefit from those attributes in similar ways. For instance, even if they don’t often see patients face-to-face, pathologists are often crucial members of diverse medical teams.

Obviously, learning about leadership is a must. But how? Besides turning to Amazon for as many audiobooks on the subject as I can listen to on the way to and from work, I’ve found the best way to learn is from mentors in the profession. And don’t just take my word for it. Even Tony Robbins agrees!

I met one of my most influential attending physicians as a student. He was a brilliant internist who I swear would have been a famous stand-up comedian had he not entered medicine. Nobody was safe from his wit, which he used both as an icebreaker with patients and as a tool to bring his team together. During that particular rotation, I never felt like the lowly student who occupied the lowest rung of the proverbial ladder.

My attending didn’t see students in front of him. He saw future doctors. Future leaders. And he treated us as such. As a result, even though we all thought we had already worked our hardest, we managed to squeeze out a little bit more under him.

In short, he was one of the best leaders I have ever worked with.

The Future = More Classes?! Yup.

Since medicine is constantly evolving, so must its curriculum. The goal of medical school has always been to create good doctors. Since the best ones are accomplished leaders as well, it only makes sense that medical schools should teach leadership skills.

For many, being a resident physician is their first job. Think about that for a minute. Their training, albeit extensive, is relatively narrow in scope: it’s based on countless hours of study, clinical practice, and test taking. Then it’s complete. They graduate and are immediately expected to be leaders. Who have they learned from, though? Often it’s people like them: fledgling residents with no formal instruction in mentorship.

What’s the solution? Well, it’s complicated. Leadership courses on their own might not be effective for everyone, and leadership is not something we can expect to be picked up after a semester of intense studying. But introducing leadership as a concept all physicians will encounter is vital. It should always be there in the background as something we’re working on via multiple avenues. We aren’t learning to interpret EKGs or lab values here.

Leadership is a skill that requires careful cultivation and attentive nurturing. Sure, adding yet ANOTHER subject to the laundry list of courses that defines medical school is daunting. However, it’s necessary: doing so will create a new, more well-rounded, and more influential generation of young doctors.

The Key Three (Takeaways)

1. Whether they’re prepared or not and whether they like it or not, physicians are leaders.

2. The better the leader, the better the doctor.

3. Leadership training should be integrated into the medical school curriculum.

