AI is becoming ubiquitous in our lives today. You’ve probably seen it in your personal life by talking to your Alexa, chatting with ChatGPT, or speaking with a conversational IVR or voice agent after calling into a business’s support line.
If you’re reading this, chances are you’ve seen it in your professional life too. Maybe you built that intelligent contact center solution or are actively developing the next wave of AI communication tools yourself.
In any case, it’s becoming increasingly obvious that we’re at a critical inflection point. AI isn’t just a buzzword anymore. It’s becoming a fundamental part of how humans and machines communicate and collaborate.
And the key to this transformation is agentic AI.
So, what is agentic AI?
Traditional AI and automation handle routine, predefined tasks. Agentic AI, on the other hand, acts. It doesn’t just respond to prompts; it applies contextual understanding to make decisions or trigger real-world actions.
To illustrate, let’s return to the example of a voice agent in a contact center. Traditional customer chatbots had to be pre-programmed to listen for very specific responses in order to determine the next sequential step. Did the user say “password”? Great, let’s direct them to password reset instructions! But, uh oh, this was actually an issue of a “stolen password.” Now the customer is frustrated, the call is being forwarded to Tier 1 support for triaging and additional hand-off, and the list of key phrases and intents needs to be updated. Oof.
With agentic AI, agents can act more autonomously. They don’t need as many instructions, and can also work with backend systems to take action on behalf of the user. So in this scenario, the voice agent would be able to detect the intent to report an issue then prompt the user to change their password, freeze the account, and intelligently route the call to the company’s fraud team for further action.
In short, agentic AI bridges the gap between intelligence and action.
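To make the contrast concrete, here’s a minimal sketch of that voice-agent scenario. The intent classifier is a toy stand-in for an LLM, and every function and intent name is hypothetical rather than taken from a real product:

```python
from functools import partial

# Minimal sketch of an agentic action loop (all names hypothetical).
# A traditional bot maps a keyword to one canned reply; an agentic
# agent maps a classified intent to an ordered plan of backend actions.

def classify_intent(utterance: str) -> str:
    """Toy stand-in for an LLM intent classifier."""
    text = utterance.lower()
    if "stolen" in text and "password" in text:
        return "report_compromised_account"
    if "password" in text:
        return "reset_password"
    return "unknown"

# Hypothetical backend actions the agent may trigger on the user's behalf.
def prompt_password_change(user_id: str) -> str:
    return f"password change prompted for {user_id}"

def freeze_account(user_id: str) -> str:
    return f"account {user_id} frozen"

def route_to_team(user_id: str, team: str) -> str:
    return f"{user_id} routed to {team}"

# Each intent maps to a plan of actions, not a single canned reply.
PLANS = {
    "report_compromised_account": [
        prompt_password_change,
        freeze_account,
        partial(route_to_team, team="fraud"),
    ],
    "reset_password": [prompt_password_change],
}

def handle(utterance: str, user_id: str) -> list[str]:
    intent = classify_intent(utterance)
    return [step(user_id) for step in PLANS.get(intent, [])]

print(handle("Someone has my stolen password!", "cust-42"))
```

The point of the sketch: the “stolen password” utterance no longer dead-ends in the wrong canned reply. It resolves to a multi-step plan (prompt a change, freeze the account, route to fraud) without anyone enumerating key phrases for each branch.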
Obstacles limiting the move from intelligence to action
Even the most powerful large language models (LLMs) have limits. On their own, they’re confined to their training data, unable to access real-time information or take meaningful action.
This creates gaps:
- No access to live data or events
- Agents can’t trigger workflows
- Businesses see limited impact
To move from intelligence to true agency, we need a standard way for AI systems to interact dynamically and securely with existing tools and APIs.
Enter Model Context Protocol (MCP)
Model Context Protocol (MCP) is an emerging standard for connecting LLMs with third-party data sources and tools. MCP makes interoperability possible by enabling communication between AI models and platforms like telecom APIs, business systems, or CRM data stores.
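Under the hood, MCP messages are JSON-RPC 2.0, with conventions like `tools/list` (discover what a server exposes) and `tools/call` (invoke one tool with structured arguments). Here’s roughly what that looks like on the wire; the tool name is hypothetical, and exact field names should be checked against the current MCP specification:

```python
import json

# Illustrative shape of MCP traffic between an AI client and a server.
# MCP messages are JSON-RPC 2.0; method names follow the protocol's
# tools/list and tools/call conventions.

# The client asks the server which tools it exposes.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# The client invokes one tool with structured arguments.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "lookup_number",          # hypothetical tool name
        "arguments": {"phone": "+19195550123"},
    },
}

# Messages are plain JSON, so any transport that moves text works.
wire = json.dumps(call_request)
decoded = json.loads(wire)
print(decoded["params"]["name"])
```

Because the framing is this simple, the same client can talk to a telecom API, a CRM, or a data store without bespoke glue code for each one.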
For example, OpenAI has recently released a new feature in ChatGPT called Apps that lets users interact directly with platforms like Booking.com, Figma, Spotify, and Zillow. Powered by MCP, this feature lets the LLM engage with connected tools to answer questions or take action. For instance, with the Spotify app, ChatGPT can create a new playlist directly in your Spotify account based on the prompt “Create a 2 hour roadtrip playlist with peak autumn vibes that I can listen to on my drive up to the mountains this week.” (For real! See for yourself.)
Now, let’s zoom into communications.
Practical applications
At its core, MCP connects AI agents with external systems. This makes its potential as broad as the services you choose to integrate with. When designing for MCP-powered use cases, it helps to think in terms of two primary categories: instructions and workflows.
Instructions
Instruction-based prompts let you give explicit, conversational commands that spell out which actions to execute.
For example, “Create a report on all phone numbers available for purchase in North Carolina with the area code 919.”
This approach gives you the most control over the steps that the agent is executing via MCP. It’s ideal for internal operations or individual actions rather than multi-step events.
Using the Bandwidth MCP Server Package, you could perform tasks like:
- Look up information about phone numbers, such as line type and number status, to validate numbers, reduce rejections, and predict costs
- Send MFA requests or custom text messages using SMS, MMS, or RCS
- Pull inventory, usage, and billing reports to stay on top of your communications
And that’s just a starting point.
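As a rough sketch of the instruction pattern, the number-report prompt above resolves into a single tool call with structured arguments. The tool and parameter names here are hypothetical stand-ins, not the actual Bandwidth MCP API:

```python
# Sketch of a single instruction-style tool call. The tool name and
# parameters are hypothetical; a real MCP server would forward this
# to the provider's inventory API instead of returning canned data.

def search_available_numbers(state: str, area_code: str) -> dict:
    """Stand-in for an MCP tool that queries number inventory."""
    return {
        "state": state,
        "area_code": area_code,
        "numbers": [f"+1{area_code}5550{i:02d}" for i in range(3)],
    }

# "Create a report on all phone numbers available for purchase in
# North Carolina with the area code 919" maps to one explicit call:
report = search_available_numbers(state="NC", area_code="919")
print(f"{len(report['numbers'])} numbers found in {report['state']}")
```

Note the one-prompt-to-one-tool shape: the model’s job is only to extract `state` and `area_code` from the instruction, which is what keeps this pattern so controllable.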
Workflows
Workflow-based prompts go a step further by triggering an MCP server as part of a sequential set of steps.
For example, you could use a CPaaS MCP server to enable a scheduling agent to send confirmation messages over text. This one action may include three distinct steps:
- Conduct a number lookup to check if the phone number can receive a text message
- Select RCS or SMS based on number type
- Send an automated confirmation over text
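The three steps above can be sketched like this, with hypothetical stand-in functions in place of real MCP tool calls and a toy line-type lookup:

```python
# Sketch of the three-step confirmation workflow. All function names
# are hypothetical stand-ins for MCP tool calls; the lookup logic is
# a toy heuristic, not a real carrier query.

def lookup_line_type(phone: str) -> str:
    """Step 1: number lookup. Toy rule in place of a real API call."""
    return "mobile" if phone.endswith("3") else "landline"

def supports_rcs(phone: str) -> bool:
    """Part of the lookup: whether the handset can receive RCS."""
    return phone.startswith("+1")

def send_confirmation(phone: str, channel: str, body: str) -> str:
    """Step 3: send the message over the chosen channel."""
    return f"sent via {channel}: {body}"

def confirm_appointment(phone: str, body: str) -> str:
    if lookup_line_type(phone) != "mobile":           # step 1
        return "cannot text a landline; falling back to voice"
    channel = "RCS" if supports_rcs(phone) else "SMS"  # step 2
    return send_confirmation(phone, channel, body)     # step 3

print(confirm_appointment("+19195550123", "See you Tuesday at 3pm"))
```

The decision (RCS vs. SMS vs. no text at all) and the execution live in one flow, which is the workflow pattern in miniature.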
This is where the flexibility of MCP really shines. By combining decision-making and execution in one smooth process, MCP brings intelligent automation to real-world communication flows. And the best part? It’s all done through one integration.
Key considerations when implementing an MCP server
As an emerging protocol, MCP is still new territory for a lot of developers. While it’s opening up exciting possibilities, it also comes with some growing pains—especially when it comes to security. There’s still work to be done and best practices to be established around how to deploy MCP servers safely in real-world environments while protecting end-user data.
Below are several important considerations to keep in mind as you explore MCP adoption and agentic AI development.
Hosting model choices
Local hosting
Local MCP servers, as you may have guessed, run in the same local environment as your AI application. User credentials are managed locally in the form of environment variables or encrypted local files to be used at runtime, simplifying authentication. This architecture is often structured as a one-to-one pairing between the AI application and its MCP server, though some configurations can reuse a local MCP instance across agents.
For use cases where access to local files is important, a locally hosted MCP server offers control and privacy but may introduce resource overhead and end-user configuration requirements.
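As a small sketch of the local-hosting pattern, credentials can be read from environment variables at runtime so they never live in code or prompts. The variable names below are hypothetical:

```python
import os

# Sketch of local credential handling for a locally hosted MCP server:
# secrets come from environment variables at runtime and never appear
# in source code or model context. Variable names are hypothetical.

def load_credentials() -> dict:
    api_user = os.environ.get("BW_API_USER")
    api_secret = os.environ.get("BW_API_SECRET")
    if not api_user or not api_secret:
        # Fail fast at startup rather than mid-session.
        raise RuntimeError("missing BW_API_USER / BW_API_SECRET")
    return {"user": api_user, "secret": api_secret}

# For illustration only; never hardcode real secrets.
os.environ.setdefault("BW_API_USER", "demo")
os.environ.setdefault("BW_API_SECRET", "s3cret")
print(load_credentials()["user"])
```

This is the end-user configuration burden mentioned above: someone has to set those variables on every machine the server runs on.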
Remote hosting
Implementing a secure, remotely hosted MCP server is a bit more complicated, resulting in an architecture that can manage sessions for multiple clients at once, commonly using OAuth or similar token-based authentication. Since configuration is handled by the platform or service integration, authentication and initial setup are often simpler for the end user, but more complex for the AI engineer.
The SaaS industry is moving quickly to offer its customers remote MCP access, and the payoff could be huge in terms of scalability. Think about our example from earlier: ChatGPT Apps follow the MCP standard, connecting remotely to partner-hosted services like Zillow or Spotify, without users needing to host the MCP servers themselves.
From a platform perspective, investing in the development of a remote MCP server unlocks virtually unlimited possibilities for integration with web-based AI applications. From an enterprise perspective, however, remotely hosted MCP servers offer easier integration but may carry security and compliance risks as standards evolve, and they remain a more limited option in the industry today.
Responsible and ethical use
As companies race to take advantage of AI advancements like agents, trust has to come first.
According to Salesforce’s State of the AI Connected Customer 2025 report, 61% of customers say it’s more important than ever for companies to be trustworthy when using AI—but fewer than half actually trust brands to use it responsibly. And even though 71% of organizations report having a dedicated AI governance function in place, 67% admit they’re rolling out AI tools without the governance structures needed to manage risk.
Building trust into your AI-powered communications solutions isn’t optional. It means:
- Designing for transparency so users understand when and how AI is acting on their behalf
- Keeping humans in the loop for critical or high-impact decisions
- Understanding agentic workflows so you know where and how data is accessed
- Ensuring a solid foundation with access controls and other security protocols
- Making it clear how data is used and protected
Responsible AI isn’t just a compliance box to check. It’s what will set apart solutions that last from those that don’t.
Data access and security
Since MCP enables real-time interoperability with business data, it’s crucial to establish strong data governance practices from the start, including access control, auditability, and authorization management.
Here are some best practices that can help secure your implementation:
- Always follow standard OAuth 2.0 security best practices
- Carefully consider parameter or token passthrough via MCP. You’ll want to ensure your MCP servers verify all incoming requests and bind session identifiers to unique user identifiers.
  - If requests aren’t validated, passthrough can bypass critical security controls and reduce auditability
- Take care when integrating agents with different identity types, whether static or dynamic (i.e., service and user accounts), to avoid scenarios where one agent acts on behalf of another with different identity scopes (a.k.a. the “confused deputy problem”)
  - For example, a user-level authenticated agent calling a second agent with admin-level privileges via MCP could broaden access beyond its intended scope
- When hosting MCP servers locally, isolate them in dedicated processes or containers with restrictive permissions (sandboxing), and apply the principle of least privilege to all connected services and dependent resources
- Leverage MCP’s built-in security primitives to define your server’s operational boundaries
  - Scopes: user-approved permissions that determine which tools and data the agent can access via an MCP server
  - Roots: URI patterns that specify the boundaries for accessible resources and prevent unauthorized traversal
- Thoroughly test your MCP implementation for security and access control edge cases
  - Validate that token handling, user authorization, and permission boundaries behave as expected under real-world conditions
  - Include negative testing for invalid or expired tokens
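Two of the checks above, request verification and session-to-user binding, can be sketched in a few lines. The token format is a toy stand-in; a real server should validate OAuth 2.0 tokens properly rather than consult an in-memory table:

```python
# Toy sketch of request verification plus session-to-user binding.
# Token values and the registry are stand-ins for real OAuth 2.0
# validation; the structure, not the lookup, is the point.

VALID_TOKENS = {"tok-alice": "alice", "tok-bob": "bob"}  # token -> user
SESSIONS: dict[str, str] = {}                            # session -> bound user

def authorize(session_id: str, token: str) -> str:
    user = VALID_TOKENS.get(token)
    if user is None:
        # Negative path: invalid or expired token is rejected outright.
        raise PermissionError("invalid or expired token")
    bound = SESSIONS.setdefault(session_id, user)
    if bound != user:
        # A different user presenting a known session id is a
        # confused-deputy / session-hijack signal: reject it.
        raise PermissionError("session bound to another user")
    return user

print(authorize("sess-1", "tok-alice"))
```

Binding each session identifier to exactly one user is what preserves auditability: every action traced through a session maps back to a single authenticated identity.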
When handling communications data, developer teams must treat both customer and internal operational data with care. Protecting sensitive information isn’t just about compliance. It’s about trust, which is the foundation of every customer relationship.
Building and embedding agentic AI capabilities
Consumers are still navigating their comfort levels with AI, let alone agentic AI. For example, while 38% of customers feel uncomfortable about AI providing financial advice, that number rises to 58% when asked about AI making financial decisions on their behalf.
Before rolling out new capabilities at scale, it’s worth taking a hard look at your customer-facing impact. Ask yourself how much AI your users are truly ready for, and make sure your deployment strategy builds trust, not uncertainty.
When you’re thinking about where to start building and embedding agentic capabilities, it’s smart to reflect on initial AI deployment strategies. In the early waves of AI adoption in the contact center, companies often started with Agent Assist so they could test AI with internal teams before putting it directly in front of customers. You can take a similar approach here: start with internal operations, validate performance, then expand outward.
Here are some practical next steps to guide that journey:
- Start small. Begin with a simple operations use case, such as pulling voice quality metrics for predictive maintenance.
- Add context and skills. Introduce MCP to pull relevant information and orchestrate tasks between your AI and existing systems, elevating from intelligence to action where helpful.
- Iterate and expand. Measure impact and gradually extend into broader internal and customer-facing applications.
Things to ask yourself
Finally, here are some questions to ask yourself as you consider how you could enhance your communications with agentic AI using MCP:
- Where could agentic AI create the most value in my workflows today?
- What data or tools should my agents connect to through MCP?
- How can I design for transparency and control from the start?
- What guardrails should be used to ensure consistent, reliable, and ethical outcomes?
Conclusion
Agentic AI is reshaping how communication works at every level, turning static interactions into intelligent collaborations. For developers, this is the moment to define that future.
At Bandwidth, we’re here to help you bring that future to life with secure communications APIs, trusted infrastructure, and deep expertise in voice and messaging innovation.