Salesforce Outlines 7 Opportunities to Deepen Trust in AI in Response to White House Executive Order
Nov. 1, 2023 — It has been a historic week for addressing the rapid development of artificial intelligence (AI), with proactive steps to mitigate risk. The White House released an AI Executive Order, marking the most significant action a government has taken on AI to date. The G7 also agreed this week to a landmark code of conduct outlining how companies should mitigate risks as they develop advanced AI.
It’s energizing to see governments take definitive and coordinated action toward building trust in AI. From the EU’s AI Act, proposed in 2021, to this week’s U.S. Executive Order, governments recognize that they have an essential role to play at the intersection of technology and society. Creating risk-based frameworks, pushing for commitments to ethical AI design and development, and convening multi-stakeholder groups are just a few key areas where policymakers must help lead the way.
How the Executive Order aligns with Salesforce’s AI approach
For years, Salesforce has understood and helped unlock the incredible potential of AI for the enterprise, but the Company has also seen a trust gap take shape. Our customers — including 90% of Fortune 100 companies — are enthusiastic about AI but concerned about risks like data privacy and data ethics. Businesses are eager for guardrails and guidance, and looking to government to create policies and standards that will help ensure trustworthy AI.
Salesforce has been active in creating guardrails in line with what the White House has proposed, including:
- Privacy: Like the White House, Salesforce has long called for comprehensive data privacy legislation. This week’s Executive Order goes a step further, calling for privacy-related research, guidance to federal agencies, and preservation of privacy throughout AI systems training.
- Safety: Salesforce is glad to see that the National Institute of Standards and Technology (NIST) — whose AI Framework informed much of the White House’s Executive Order — will be setting rigorous standards for red-team testing to ensure that AI systems are safe, secure, and trustworthy.
- Equity: It’s great to see the Executive Order prioritize equity by addressing algorithmic bias and discrimination. The order will also provide guidance across the criminal justice system, federal benefits programs, and federal contracting to ensure that AI is used safely and fairly.
- Global Cooperation: Salesforce regularly provides guidance and expertise to governing bodies around the world at national and multilateral levels. The Executive Order reinforces the need to work with other nations and practitioners to advance safety and responsibility, as well as promote AI’s benefits.
- Government Adoption: The Executive Order highlights that AI can help the government better serve its constituents, but it also outlines the need for usage guidance to protect privacy and security, the need for AI talent, and the ability to procure technology efficiently. Salesforce has been working with government agencies to use AI to modernize public service.
At Salesforce, Trust has always been the number one value. The Company has spent over a decade investing in ethical AI, both in business and with customers. Salesforce’s Office of Ethical & Humane Use has been guiding the responsible development and deployment of AI for years — first through Trusted AI Principles and more recently with Salesforce’s Guidelines for Generative AI. The Company has in-house AI researchers, more than 300 AI patents, and is actively investing in AI startups through a $500 million ventures fund.
It’s not just about asking more of AI, it’s also about asking more of each other — our governments, businesses, and civil society — to come together and harness the power of AI in safe, responsible ways. Salesforce doesn’t have all the answers, but the Company knows that leading with trust and transparency is the best path forward.
In that spirit, here are seven ways to build trust in AI:
1. Companies should protect people’s privacy: Salesforce believes companies should not use any datasets that fail to respect privacy and consent. The AI revolution is a data revolution, and society needs comprehensive privacy legislation to protect people’s data and help pave the way for other AI legislation.
2. Companies should let users know when they’re interacting with AI systems: That means helping users understand when and what AI is recommending, especially for high-risk or consequential decisions. Companies must ensure that end users have access to information about how AI-driven decisions are made.
3. Bigger is not always better: Smaller models offer high-quality responses, especially for domain-specific purposes, and can be better for the planet. Governments should incentivize carbon footprint transparency and help scientists advance carbon efficiency for AI.
4. Policy should address AI systems, not just models: A lot of attention is being paid to models, but to address high-risk use cases, we must focus on the whole layer cake: data, models, and apps. Every entity in the AI value chain must play a role in responsible AI development and use.
5. AI is not one-size-fits-all: Governments should protect their citizens while encouraging inclusive innovation. This means creating and giving access to privacy-preserving datasets that are specific to their countries and cultures.
6. Responsibility today fosters safety tomorrow: Many talk about the risks of advanced AI as if they are separate from, or in conflict with, addressing the risks that AI poses today. But solutions build on each other, providing us with technical know-how and muscle memory to handle new risks as they emerge.
7. Appropriate guardrails unlock innovation: The first question our customers ask about AI is always about trust and control of their data. Today’s businesses worry that AI may not be safe and secure, and they want governments to prioritize data privacy and create standards for AI systems transparency.
It is exciting to see governments and businesses from around the world working together to navigate a future that leverages the power of AI while ensuring trust is at the center.
Go deeper: Learn more about trusted AI at Salesforce, including our policy advocacy and the tools Salesforce is providing to employees, customers, communities, and partners to develop and use AI safely and responsibly:
- Read Salesforce’s five guidelines for responsible generative AI development
- Learn about Salesforce’s advocacy for a standard privacy law
- Check out a recent post that shares Salesforce’s view on AI regulation
Salesforce empowers companies of every size and industry to connect with their customers through the power of AI + data + CRM. For more information about Salesforce (NYSE: CRM), visit: www.salesforce.com.
Source: Paula Goldman and Eric Loeb, Salesforce