Trust and AI at Bandwidth: Our team’s approach

Let’s face it: there’s a lot of talk about AI and it’s hard to know what to trust or who to believe. Should I opt in or opt out? Is it an economic boom or bust? The end of the world or just the start of an era? A time-saver or a slot machine? It’s enough to make anyone dizzy (especially privacy pros like me).

No question, the AI landscape is evolving quickly and there is always more to learn. Bandwidth is committed to that learning process: leaning into curiosity, challenging ourselves, innovating in our areas of expertise, and improving where we aren't as strong. A balance of growth and governance. And our approach to trusted AI has evolved through authenticity, diligence, and empowerment.

Authenticity over hype

Bandwidth’s strategic vision for AI innovation is always in service of our values, including transparency and trust, and the value propositions we offer our customers. In our products, this means prioritizing flexibility and compatibility, with a vendor-agnostic platform that allows customers to choose and integrate with their preferred provider and with us as their preferred carrier. It means looking internally for efficiencies and agentic operations that will enhance speed, clarity, and quality in the customer experience and our network performance. And it means investing in problem areas where we don’t have the answers, so we can be part of the solution, as we are with our Innovation Studio.

To us, the transparent and trust-building approach is to do the work and offer that freedom of choice. We celebrate that what we are building will help our customers choose how they want to engage with AI, where they deploy it, and with what level of risk.

For savvy tech leaders, minimizing and mitigating risk is another key element of trustworthy AI, and it’s critical that we set the stage for their success through our own governance, compliance, and risk management programs.

Diligence by design

Behind every trusted product is a vigilant team of pros who keep it that way. Preparing for the security puzzle of AI is another challenge our Bandmates tackled before they were asked: as generative AI tools became mainstream in SaaS products, coding, and everything in between, our Information Security teams got to work early, adapting our vendor risk management, software release, pen testing, and other workflows to catch risk traps in AI deployment before they make it to the gate.

And since AI is only as powerful as its data, it's crucial that our risk reviews are tied closely to the data involved in each product or vendor use case. Our Privacy and Security teams embed AI risk reviews in both our Vendor Risk Management process (for third-party tools) and our Privacy & Security by Design process (for development of new product features, services, and systems). Our goal is protecting our customers and our data at the outset, with careful consideration of key risk areas in AI: What problem are we solving? Is data used for training? Is there a human in the loop for key decision-making? Are the foundational models tested and safe? Is the environment appropriate and secure for the use case?
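To make the idea concrete, a gating checklist like the one above could be codified so that a use case isn't approved until every question has a satisfactory answer. Here's a minimal sketch in Python; the class name, fields, and approval logic are entirely hypothetical and illustrative, not Bandwidth's actual review criteria:

```python
from dataclasses import dataclass

# Hypothetical sketch of an AI risk-review checklist as a simple gate.
# Field names mirror the review questions; none reflect a real system.

@dataclass
class AIRiskReview:
    use_case: str
    problem_statement: str        # What problem are we solving?
    data_used_for_training: bool  # Is our data used for training?
    human_in_the_loop: bool       # Human review of key decisions?
    model_tested_and_safe: bool   # Foundational model vetted?
    environment_secure: bool      # Environment fits the use case?

    def blockers(self) -> list[str]:
        """Return the review questions that still block approval."""
        issues = []
        if not self.problem_statement:
            issues.append("No clear problem statement")
        if self.data_used_for_training:
            issues.append("Training on our data requires extra review")
        if not self.human_in_the_loop:
            issues.append("No human in the loop for key decisions")
        if not self.model_tested_and_safe:
            issues.append("Foundational model not vetted")
        if not self.environment_secure:
            issues.append("Environment not approved for this use case")
        return issues

    def approved(self) -> bool:
        return not self.blockers()


review = AIRiskReview(
    use_case="vendor-chatbot",
    problem_statement="Automate tier-1 support triage",
    data_used_for_training=False,
    human_in_the_loop=True,
    model_tested_and_safe=True,
    environment_secure=True,
)
print(review.approved())  # True: no blockers remain
```

The point of the sketch is the shape, not the code: each question becomes an explicit, auditable field, so "layering review into existing processes" can mean a short structured form rather than a new workflow.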

By layering in review to existing processes, we’ve made adoption easier and more efficient for our internal teams without sacrificing the rigor that’s necessary to meet our privacy and security standards and stay enterprise-ready in everything we do.

And to facilitate information-sharing and problem-solving across teams wrestling with the same hard questions (and surfacing some brilliant solutions!), we set up a cross-functional committee of bright minds for that, too.

While there are plenty of unknowns in the world of AI deployment, we’ve built confidence that we’re asking the right questions, investing in key areas, and getting better every day. Our governance processes support confident deployment for our customers and empowerment for our Bandmates.

Empowerment in AI literacy

Around the world, Bandmates supercharge their productivity by using AI tools for efficiency and creativity. And it’s no accident, because a central part of Bandwidth’s strategic vision is to empower employees with AI tools and promote innovation at the grassroots level, with trust in and support for our Bandmates.

We believe that effective, targeted training is key to empowering our employees to use AI responsibly and intentionally. Fortunately, we have a super-talented and proactive Learning & Development team who met that challenge before any of us even asked.

In addition to company-wide privacy training (of course!), our Bandmates benefit from a truly unique, iterative, and mandatory AI literacy training program. Our Learning & Development team designed this program to teach Bandmates how to engage with AI intentionally, effectively, and responsibly in their work; this foundational training is also a cornerstone in our preparation for compliance with relevant laws, including the EU AI Act. We’re also proud to say that it’s truly custom, with a tailored combination of mandatory modules, gamified learning opportunities, and ongoing fresh additions to keep the curriculum fun, relevant, and aligned with our company goals.

While this innovative program puts us ahead of the curve on AI literacy requirements under emerging laws, what's most remarkable is that it wasn't created for that purpose. It was made to be useful. Just to help us be better. And that's exactly what it does.

I am well aware that lawyers are famous for being naysayers. But building trust in AI isn't about saying yes or no. In my experience, building a compliance program to support AI requires constant collaboration, grounded in transparency, humility, and curiosity. Our customers expect the same from their suppliers and partners. When we put these principles into action, trust in AI looks like empowerment, diligence, and authenticity, and that's exactly how we approach innovation and governance in practice at Bandwidth.