Material’s use of AI is focused on security, pragmatism, responsibility, and transparency. This is the first in a series of blogs outlining how Material’s ML models work, and why we use them.
The state of enterprise and consumer AI
Whether detecting patterns in otherwise-unremarkable enterprise data, summarizing meeting notes, or identifying those weird plants growing in your backyard, AI in all its various manifestations has permeated our professional and personal lives with incredible speed. Blink too long or too often and you can miss staggering developments in technology and capability.
The promise and potential of AI are well-understood at this point. But whether it’s useful and secure right now in all of its various forms is a very different question. Depending on the application and the specific model being used, AI can be anything from mind-blowingly helpful to laughably inept.
That is to say: contemporary AI – taken as a clumsy aggregate approximation of the entire market – exists in something of a quantum state across the entire hype cycle. It is simultaneously riding the peak of inflated expectations, sitting comfortably atop the plateau of productivity, and wallowing in the trough of disillusionment.
Anyone who’s attended any cybersecurity or tech professional conferences or sat through more than a few vendor pitches over the last 12 months can be forgiven for being stuck in the trough. Playing an “agentic AI” drinking game at RSA would have been deadly several times over.
But growing pains and naked opportunism aside, there remain abundant opportunities for safe and effective use of ML models right now. And the potential for future growth in the short- and long-term completely justifies lingering inflated expectations. In fact, given the uncertainty of the ceiling for AI and ML development, it can be hard to tell exactly which expectations are inflated and which are completely justified.
Material Security’s approach to AI
Material Security has been using artificial intelligence and machine learning models under the hood of our cloud office security platform for some time. These models play a critical role in our phishing detection and other elements of our platform. While not a panacea unto themselves, when combined with expert human threat research, organizational context, and other layers of defense, they are an incredibly powerful tool.
But as a security company, our primary goal is to deliver solutions to our customers that increase their security posture and harden their environments pragmatically–without introducing any new flaws or vulnerabilities. To that end, we’ve always been very deliberate about how and where we deploy ML models.
Material's approach to AI focuses on building and using effective, trustworthy machine learning models in a way that is secure, transparent, and in compliance with all applicable regulatory requirements and best practices.
Our use of AI and ML centers on several key tenets:
- Security - Our customers rely on us to protect their critical digital assets and most sensitive data. This demands that any system we build or deploy within our production environment be designed with security top of mind.
- Competence - ML models can introduce significant complexity to software–to justify that complexity, those models must deliver results above and beyond what can be accomplished with traditional programming methods. We deploy our proprietary models and use third-party models only if they display high competence: in other words, they perform above and beyond other methods we could use to achieve the same goals.
- Reliability - It’s not enough to build something that’s occasionally great at what it does–systems that support the security and availability of our customers’ critical data need to be consistently great. Incorporating AI into our platform in a way that is safe and reliable is critical.
- Alignment - We build our models and use machine learning in alignment with our goals as a company and the needs of our customers–to enable a pragmatic platform that delivers effective security while saving security teams time and reducing user friction. We aren’t wasting resources deploying AI for tangential use cases or simply tacking an aftermarket LLM onto our product just so we can toot the AI horn.
- Transparency - Our customers trust us with their most sensitive data, and they deserve to know how it’s being handled and processed. They deserve to know what’s going on behind the scenes, and to have choice and control over their own decisions based on that knowledge.
As a security company, we have a responsibility to our customers to ensure everything we put into production is safe, secure, and reliable first and foremost.
To that end, Material's AI applications currently focus on improving detection capabilities across the cloud office environment, improving the accuracy and speed of phishing detection, streamlining user report triage, employing natural language classifiers to understand complex content, and more.
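To make the "natural language classifier" idea concrete, here is a minimal, purely illustrative sketch of how a text classifier of that general kind can be built. This is a toy example with made-up training data, not Material's actual model, architecture, or data pipeline:

```python
# Hypothetical illustration only: a tiny text classifier of the general kind
# "natural language classifier" refers to. Not Material's production model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up toy dataset: 1 = phishing-like, 0 = benign
texts = [
    "Urgent: verify your account password now or lose access",
    "Your invoice is attached, click here to confirm payment details",
    "Reset your credentials immediately via this secure link",
    "Lunch meeting moved to 1pm, see you in the usual room",
    "Here are the notes from yesterday's planning session",
    "Quarterly report draft attached for your review",
]
labels = [1, 1, 1, 0, 0, 0]

# Pipeline: convert text to TF-IDF features, then fit a linear classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new message; predict_proba yields a confidence rather than just a
# label, so low-confidence cases can be routed to human review
msg = ["Please verify your password at this link immediately"]
prob = model.predict_proba(msg)[0][1]
print(f"phishing probability: {prob:.2f}")
```

The design point the sketch illustrates is producing a calibrated score rather than a bare yes/no, which is what lets a classifier sit inside a layered defense alongside human threat research instead of acting as a lone gatekeeper.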
Material classifies its use of AI as "Limited Risk or Minimal Risk" under the EU AI Act, emphasizing transparency in AI deployment. We have established internal compliance measures, including AI governance and risk management procedures, AI ethics protocols, and clear roles and responsibilities for various teams involved in AI development and customer interaction. Material obtains customer consent before deploying ML/AI, maintains thorough model development processes for explainability, and adheres to data protection regulations like GDPR.
Conclusion
AI is an incredibly exciting tool, and its effective use unlocks powerful use cases, capabilities, and time-saving efficiencies. But for the time being, it remains just one among many tools at our disposal for securing our customers’ email, data, and accounts.
In the interest of fulfilling one of our central tenets and providing transparency into our use of ML models, this is the first of a series of blog posts outlining some of the ways we use AI in our platform, why we use the models we use, how we ensure their competency and reliability, and more.
And if you’d like to see what it looks like in practice, contact us for a demo today.