When digital ethics, corporate digital responsibility (CDR), or values in the digital space are discussed, one question comes up quite often: 'Why should we start now? It's all still up in the air.' This impression is both right and wrong. It is true that laws and standards are currently being developed worldwide, particularly at the European level, to give digital technologies a framework for their development. And indeed, much in this area is not yet settled:
Which values will be included in the standard lists for implementing the AI Act?
After the intense discussions, which systems will actually end up classified as high-risk AI?
Which auditing system will the EU use?
At the same time, these questions are specific to the EU and to AI, and even once they are answered, companies will not be exempt from accountability for their products and business models.
It is a misconception that products will immediately and automatically become better for our society once 'the list' of ethical values exists, a checklist to tick off during product development. This is about nothing less than how we want to live together in the future, and in a democratic society that is not a question to be left to legislators or standards bodies alone. If we see regulation only as a top-down constraint, it will cause a lot of frustration and limit product development. If instead we treat the current discussion as an opportunity to genuinely examine why we drive technologies and innovations in the first place, we become capable of taking action ourselves. Countless examples from recent years have shown that quick, or perhaps hasty, software launches carry a lot of negative potential; the same technologies can equally be used for a better life. The potential is definitely there, but only if we ask the right questions at the beginning of the innovation or product development cycle. This is not about burdensome bureaucracy.
No modern development team would think of developing a product without a proper ideation process, or without testing and incorporating feedback from test users. It has become natural to ask: What do our users want? Likewise, in both startups and large companies it is completely natural to ask 'why':
Why are we doing this?
What is our mission?
What are our values?
If you take this approach seriously, given the scope of innovation in the digital space, you cannot avoid also asking:
What are the consequences of our innovation?
What do we want to achieve positively for society?
For now, you don't need an external standard list or a list of values approved by the EU. Of course, such lists can help in checking internally developed value systems for objectivity, but they are not decisive for the development process. If management and project teams engage intensively with these questions, they will develop innovative, truly helpful products and systems. The prerequisite is the willingness to invest time and money in this openness during development. Companies also need in-house expertise to resolve the value conflicts that inevitably arise. For these challenges, existing corporate digital responsibility systems at the operational level and ethical guidelines are helpful, but as a second step rather than as a starting point. There is a lot of brilliant knowledge out there that can help you act in a digitally responsible way; unfortunately, much of it is still not widely known. Over the next few weeks, I'll be changing that by explaining existing systems to you in an accessible way.
With my Responsible Innovators Kickstart Guide, I want to enable you to put this knowledge into practice and access the ecosystem of digital ethics and responsible innovation. Let's design change for a better future together!