
Author: Elianne Anemaat (Senior Manager, Public & Society)

The Sociaal-Economische Raad (SER) recently published a comprehensive report on the impact of artificial intelligence on work and society. Despite its relevance, the report has received surprisingly little attention. It provides a clear and accessible overview of the social and economic implications of AI and offers practical recommendations for how we can respond.

This article highlights the SER’s four key recommendations, ranging from AI innovation and strategic autonomy to fair deployment and learning, and shares practical reflections, advice, and insights from Elianne Anemaat (Senior Manager, Public Sector at ADC Consulting), based on her experience in AI and public services.

 

  1. Invest in AI Innovation and Adoption

We Are Falling Behind

Let’s start with the first recommendation: “Invest in AI innovation and adoption”, because we are, indeed, falling behind.

Yes, we've heard this before. But beyond catching up on R&D, the SER rightly emphasises the need to strengthen our strategic autonomy – meaning our ability to choose European or Dutch AI solutions instead of relying entirely on U.S. or Chinese providers, with all the associated risks. That only works if such alternatives actually exist.

“We Don’t Have the Capacity to Change”

In conversations I’ve had lately, people often say: “Sure, I see why this matters, but in practice there are no real alternatives. We’re dependent on Microsoft and don’t have the time, money, or energy to change that.” I get it – I’m not throwing out my Office suite either – but I think we forget that digital autonomy doesn’t necessarily require tearing down the whole house.

Consider Small Steps Instead of Big Ones

Take the large language models powering many GenAI tools. We’re past the peak OpenAI hype. Open-source alternatives like LLaMA are available for local use, and European developers like Mistral are creating new reasoning models as we speak.
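To make the "local use" point concrete, here is a minimal sketch of running an open-weight model on your own hardware with the Hugging Face transformers library. The specific model name, prompt, and generation settings are illustrative assumptions of mine, not something prescribed by the SER report.

```python
# Minimal sketch: local inference with an open-weight model via Hugging Face
# transformers. Assumes the transformers and accelerate packages are installed
# and that you have access to the model weights; the model id and prompt are
# illustrative choices, not recommendations from the SER report.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example European open-weight model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarise the main risks of vendor lock-in for a Dutch municipality."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generation runs entirely on your own infrastructure: prompts and outputs
# never leave your environment, which is the autonomy argument made above.
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```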

Strategic decisions go beyond which model you use. What infrastructure are you running it on? Azure, AWS, and Google aren’t the only options anymore. There are now plenty of (private) cloud solutions within the Netherlands and Europe that keep your data local. The same goes for deployment and MLOps: modular, self-hosted platforms are increasingly viable.

My point is: strategic autonomy is not “all or nothing”. Every step, no matter how small, towards more control and less dependency is important. So instead of defaulting to the biggest provider, ask: where could we make a smarter choice?

 

  2. Put Decent Work at the Heart of AI Deployment

This is Bigger Than IT

The SER’s second recommendation, “Put decent work at the heart of AI deployment”, highlights how AI choices impact not just work quality but also employee well-being. If employees aren’t involved, the SER warns, the outcomes can be ineffective or even harmful.

That makes sense. Nobody likes changes they had no say in. Yet in practice, this is often approached in an awkward way. A common pitfall I see is that organisations treat AI as an IT issue. Don’t get me wrong – IT is crucial for implementation – but it shouldn’t be the one driving the vision.

Would You Let the Plumber Design the Kitchen?

Think of it like this: would you let the plumber design your kitchen? Yes, you need running water and solid plumbing, but if the plumber chooses where the stove goes or how much counter space you get, you’re bound to make the person who actually cooks very unhappy.

It’s the same with AI. If employees aren’t given insight into how AI will change their work, resistance will follow. And that makes sense: in times of change, people look for something to hold onto.

Dialogue as a Basic Requirement

That’s why structured dialogue with employees should be a baseline requirement for AI innovation. It reduces resistance, yes. But more importantly, it leads to better results. As developers, managers, and consultants, we’re all prone to tunnel vision. Collaborating with employees to identify value, potential, and risks dramatically increases the odds of a successful outcome.

The SER refers to ‘guidance ethics’ developed by ECP Platform voor de InformatieSamenleving – an approach I’ve used myself. It helps translate shared values into practical design decisions in a very accessible way.

Leaders: Step Up

In this context, I would also like to appeal to managers within organisations who are working with AI: don’t leave this to ‘the tech people’. Create space for joint exploration, discussion, and decision-making. That includes your team – and yourself.

 

  3. Keep Learning and Developing Together to Leverage AI

Beyond that One-Time Training

In all honesty, the SER’s third recommendation (“Keep learning and developing together to leverage AI”) sounds a bit obvious. After all, learning and training are basic requirements of any technological development, right?

The SER explicitly places the responsibility for developing AI skills with employers – and rightly so. But I can already foresee a pattern emerging: organisations send employees to a generic “Working with AI” course, hoping it will boost AI literacy and perhaps even help guide future decisions. And then… months pass before anyone actually applies what was learned.

Learning by Doing

I’ve delivered many data and AI training sessions over the years. They are certainly useful (if I do say so myself) for creating general awareness and foundational knowledge. But if there is one thing I have learned, it is that training courses are only effective if people can immediately apply what they’ve learned.

Recently, I helped several organisations design and implement their first GenAI solution. What stood out to me was the steep learning curve for (non-technical) teams. From day one, they co-created the solution’s purpose, discussed technical feasibility with specialists, tested early versions, and gave feedback. I’m convinced they learned more about AI in this process than I could have ever conveyed in a training.

The thing with AI is that it cannot be fully understood from theory alone. Much of what AI can or cannot do is discovered by doing it yourself: trial and error. Given the pace of AI evolution, this is a continuous learning process.

So let’s not pretend that one-off training makes teams “AI-ready.” AI is like riding a bike: you learn by doing. With support, sure – maybe even training wheels – but you need to get in the saddle yourself.

 

  4. Anticipate the Potential Distributional Effects of AI

Who Benefits, and Who Doesn’t?

The final recommendation, “Anticipate the potential distributional effects of AI”, urges us to pay attention to how AI might shift power, responsibilities, and income. In short: who gains from AI, and who doesn’t? That might sound abstract, but it’s already visible on the ground.

Ironically, the organisations with the most to gain from AI, such as municipalities, healthcare bodies, and implementing agencies, often have the hardest time getting started. Not because they don’t see the value, but because they lack the bandwidth: no capacity, no budget, no governance structure.

And so, the gap grows. The digital frontrunners – already mature, already experimenting – accelerate further. Others wait. Not out of unwillingness, but out of inability. And that seems risky to me, because these lagging organisations have just as much to lose as they have to gain.

Inequality in Public Services

If your organisation only gets in once the playing field has already been largely determined, there is little room left to set your own direction. Waiting can feel safe: “Let’s see what others do first”. But by the time you join, the rules of the game may already be set. You’ll be stuck fitting into decisions that were made elsewhere.

If that gap widens, something much more fundamental will arise: unequal access to services. Imagine one citizen gets a personalised response within days, while another waits weeks. One job seeker gets tailored suggestions and guidance; another receives a generic list of vacancies. Seemingly small differences in AI capability can translate into real disparities in speed, access, and relevance. And that’s a problem, because public service should be equitable, no matter where you live.

So, how do we close that gap? How do we make sure the benefits of AI reach everyone?

Continue the Conversation

Interested in learning more about how your organisation can get started on its AI Transformation journey? Reach out to Elianne Anemaat (Senior Manager, Public & Society).
