Responsible use and design

This article draws on wider ethics-based approaches that leaders, policy-makers and practitioners can use alongside the wider digital ethics collection to support the ethical and secure use of emerging technologies and data.

Authors and contributors: William Barker, Martin Ferguson

As we have seen from Socitm’s Planting the Flag and Digital Trends analysis, local public service leaders are increasingly called upon to make ethical decisions around the use and design of technologies and data.

This page focuses on the closely related themes of responsible use and design of technology and data. In particular, it highlights the emerging resources coming from the work of the World Economic Forum (WEF), in partnership with academic and commercial organisations, on how to grow an organisational approach to the responsible use of technology. Taken together, these tools can offer the basis for mapping and defining a set of supporting leadership approaches to complement the wider digital ethics agenda.

Responsible use of technology

Five traits of organisations that use technology responsibly

The WEF and partners have identified five traits shared by organisations that use technology ethically and are willing to consider the potential harm to humans and humanity as part of their decision-making process.

1. Technical knowledge

Ethical deliberations often start with societal principles and values (e.g. human rights, fairness, etc). However, to fully assess the implications of a new technology requires a proportion of an organization’s workforce to have a strong understanding of fundamental technology concepts.

Combining technical know-how with voices from diverse backgrounds provides a pathway for organizations to consider the impact of the technology they are using. Decision-makers need to know what their organization’s technology does and how it works.

2. Social responsibility

Organizations successfully incorporating ethics throughout their use of technology—workstreams and decision-making—are those who amplify their awareness and acceptance of the company’s role in society beyond economic transactions. These organizations engage with social enterprises, assume social duties and hold themselves accountable.

3. Foundation of trust

These organizations are known for creating corporate cultures that breed trust amongst co-workers and customers. They achieve this by honouring the ethical concerns and moral arguments of all individuals, in situations that affect them or in which they have expertise.

4. Ethical deliberation

Organizations that excel in applying ethics to technology have a deliberate approach to making decisions and forming policy: they rely heavily on data; involve people who will be impacted by their choices; consider the downstream effects of their decisions; and when possible, share the thinking behind those decisions. By engaging those involved, a consensus on the right course of action often emerges.

5. Leadership commitment

Executive sponsorship is essential for the responsible use of technology. Rapid technological innovation creates new opportunities and risks to society. Leaders who develop a practice of ethical leadership model and teach empathy; create a strong sense of community internally; adhere to professional ethics; and are systems thinkers are more likely to create a growth mindset in their organization’s culture and encourage ethical behaviour. Ethical deliberation requires constant, iterative learning.

Thought leadership reports

In support of the above approach, the WEF and partners have produced the following thought leadership reports:

Responsible Use of Technology

Looks at how the increasingly pervasive use of technology in our everyday lives has triggered a debate on how new and disruptive technologies – such as artificial intelligence (AI), robotics, 3D printing, the internet of things (IoT), 5G, blockchain, quantum computing, autonomous vehicles, biotechnology and nanotechnology – should be managed and governed.

Data Policy in the Fourth Industrial Revolution: Insights on personal data

Reviews how the development of comprehensive data policy necessarily involves trade-offs. Cross-border data flows are crucial to the digital economy, and the use of data is critical to innovation and technology. However, to engender trust, appropriate levels of protection need to be in place to ensure privacy, security and safety. Over 120 laws in effect across the globe today provide differing levels of protection for data, but few anticipated

Responsible Use of Technology: Case Study

This case study highlights tools and processes that facilitate responsible technology product design and development. The document is the first in a series that seeks to investigate how companies have begun to incorporate ethical thinking into the development of technology. Its lessons can help organizations advance their own responsible innovation practices, and may even inspire others that have created new methods in pursuit of ethical technology to share their work, either in this series or elsewhere.

Ethics by Design: An organisational approach to responsible use of technology

Looks at how an organisational approach to the responsible use of technology integrates key insights from psychology and behavioural economics with findings from market research and interviews with leaders of international organizations. It aims to help shape organizational design decisions to prompt better and more routine ethical behaviours.

The report promotes an approach that focuses on the environments that can lead ordinary people to engage in more ethical behaviours rather than relying solely on their personal character. It also outlines steps and recommendations for organisational design (see below) that have proven to be more effective than conventional approaches such as compliance training and financial compensation.

Ethics by Design: three critical components of this comprehensive approach

Source: Ethics by Design: An organizational approach to responsible use of technology

Attention: Timely, refocused attention on the ethical implications of the technology. Attention techniques and examples include reminders, checklists and frequent ethics refresher training – focus on methodology.

Construal: Individuals interpret their work in ethical terms. Construal techniques and examples include the deliberate use of ethically freighted language in mission statements – emphasis on culture. Leaders are responsible for promoting ethical decisions by providing the corporation’s vision, purpose and values. Companies have found success by framing issues beyond purely legal or regulatory compliance terms.

Motivation: Encouraging pro-social actions, setting social “norm nudges” and other culture-changing activities can be used to promote ethical behaviours. The culture of organizations positively influences motivation. It can be best sustained through robust, self-reinforcing incentives and operational structures, such as conscious community building within and across company teams and programmes to showcase ethically exemplary employees.

Co-designing digital interventions and technology projects with civil society

Paper available here: Co-designing digital interventions and technology projects with civil society

This paper explores the concept of co-design in partnership with civil society, beginning with COVID-19 technology interventions. It focuses on the role played by civil society in developing such technologies in collaboration with the private and public sectors.

While not a panacea, co-design is a practice that emphasizes how core values such as trust and empowerment can serve as a common language for meaningful collaboration. Co-design methods have the potential to be a first step towards building equitable relationships among civil society and the private and public sectors and can help address the power imbalances inherent in such collaborations.

Cooperative frameworks for partnering with civil society in the design of technology are needed to build trust between institutions and to steer natural friction towards constructive engagement. From a civil society perspective, which can also help to inform local public service responses, the report identifies the following key areas of focus:

  • Strategic engagement opportunities: Civil society actors need to be on the lookout for relevant engagement opportunities that will enable them to provide the right information at the right time to those designing and developing tech solutions. This can be achieved when engagement between civil society and government or tech companies is ongoing and long-term. This will enable civil society organizations to anticipate issues and provide constructive feedback over time, instead of engaging only reactively.
  • Strategic engagement formats: Civil society actors also need to position themselves as strategic engagement actors. The tone and manner of engagement can make a significant difference in whether their suggestions are taken onboard. Civil society actors often do not understand how the private sector works, and thus do not make adequate investment in translating their concerns into something that is achievable and relevant for private-sector companies.
  • Evidence before ideology: In order to engage effectively with tech companies and convince them of their expertise, civil society organizations need to use concrete, empirical evidence. Often, when engagement is based on ideological factors, it is more easily brushed aside by decision-makers. In gathering such evidence, information should be translated and tailored for decision-makers so they can process it more easily.
  • Independence: Since many civil society organisations working in the digital space get their funding from tech companies, it is important that they assert and demonstrate their independence from their financial sponsors. Strategies available to civil society organizations include diversifying funders, working through consortia and focusing on areas in which companies do not have an active commercial interest.
  • New capacities: Civil society organizations need to invest in people who understand both grassroots-level social justice and development issues, as well as those who can engage in conversations about technology design and policy.

Supporting tools and guidelines

WEF AI toolkit for senior leaders

While Artificial Intelligence is seen as holding the promise of solving some of society’s most pressing issues, it also presents challenges, such as inscrutable “black box” algorithms, unethical use of data and potential job displacement. Building on the work already done to guide senior leaders on how to tackle AI, this work will provide a practical set of tools that can help organisational leaders understand AI’s impact on their roles, ask the right questions, understand the key trade-offs, and make informed decisions on AI projects and implementations.

Executive leaders are responsible for stewarding their organisations through the current period of unprecedented technological change and its attendant societal impacts. A practical set of tools can empower them to ask the right questions, understand the positive and negative aspects of using AI or machine learning in their businesses, and oversee the responsible operationalisation of AI in their corporations.

The WEF toolkit is designed to align with C-suite executives’ responsibilities and interests. It addresses topics that are specific to conventional leadership positions (e.g. CEOs, COOs, CFOs), but also emerging ones (e.g. Chief Data Officer, Chief Digital Officer, Chief Technology Officer). The toolkit (see below) also supports organisations in deciding whether and how to adopt particular approaches and in understanding the power of the technology to advance their business.

Source: Empowering AI Leadership

WEF toolkit modules links

Strategy modules

  • Brand: employing AI to maintain brand reputation. Using AI to manage brands, building public trust and protecting the company image; building brand reputation by developing, deploying or using AI to improve society.
  • Competition: exploiting AI to accomplish the organization’s mission. Impact on strategy, competition and industries; using AI to disrupt, compete and grow; anticipating risk.
  • Customers: strengthening customer relationships with AI. Value to customers; service improvements; customer needs and concerns; building and maintaining trust.
  • Cybersecurity: building resilience to AI cyber risks. Assessing and addressing security risks from AI; using AI to improve cyber resilience; integrating cyber resilience into strategy.
  • Operating model: using AI to improve processes. Process and operating model transformation and innovation; improving productivity and efficiency.
  • People and culture: making it possible for AI and people to work together. Human augmentation; culture and ethics for AI success; inclusion and diversity; staffing and skills; using AI in human resource management.
  • Technology: managing the implementation of AI. Building AI systems; drawing on existing IT investments; major purchases and partners; envisioning the future of AI technology.
  • Sustainable Development: harnessing AI to sustain our resources. Optimize the use of natural resources; reduce waste and carbon footprint; reduce AI’s own carbon footprint.

Control modules

  • Audit: complying with obligations. Financial controls and reports; compliance with laws and regulations.
  • Ethics: setting standards for proper development, deployment and use. Ethics principles for AI; establishing an ethics board; implementing AI ethics codes; risks from ethics failures.
  • Governance: structuring AI oversight. Setting governance responsibilities for AI; oversight responsibilities of board committees; responsibilities of ethics board.
  • Risk: managing corporate risks. Risk exposure from AI; integrating AI into risk-management plans; using AI to manage risks effectively; crisis management.

Supporting modules

  • Responsibility: Oversight duties of board directors that are common to many or most of the modules.
  • Glossary: Explanation of AI terms.

WEF AI government procurement guidelines

Governments are increasingly seeking to capture the opportunities offered by AI to improve public-sector productivity and the provision of services to the public and to stimulate the economy. AI holds the potential to vastly improve government operations and meet the needs of citizens in new ways, ranging from traffic management to healthcare delivery to processing tax forms. However, governments often lack experience in acquiring modern AI solutions and many public institutions are cautious about harnessing this powerful technology.

Overall, the guidelines aim to guide all parties involved in the procurement life cycle – policy officials, procurement officers, data scientists, technology providers, and their leaders – towards the overarching goal of safeguarding public benefit and well-being. The WEF has developed a supporting toolkit – AI procurement guidelines.

In addition, the WEF, in partnership with the UK Office for AI, has developed the Guidelines for AI procurement, which address the ethical uncertainty around AI procurement so that officials feel equipped to use innovative technology whilst being able to mitigate risks. The guidance is aimed at public servants and commercial specialists procuring AI. The guidelines set a baseline for the harmonisation of AI procurement standards.