Material Security’s Hack Week in February 2026 focused on the productivity that’s unlocked by working with AI agents.
Folks who’ve worked in startups know the special vibe that surrounds Hack Week. Slack gets quieter. Conference rooms become scarce. And if you’re really lucky, the margarita machine makes a happy hour appearance next to the coffee urns.
Suddenly, the ideas that have been sitting in the “we should really…” category turn into, “wait, is that in prod?”
This year’s Hack Week at Material Security had a theme: what becomes possible when AI agents are real teammates? Not autocomplete. Not chatbots. Teammates.
The results were equal parts impressive, slightly alarming (in a good way), and deeply inspiring. Let’s walk through some examples of what my colleagues built to give a flavor of what’s possible! While this isn’t a promise to ship specific features, it’s a fascinating peek behind the scenes at how Material’s team thinks about the product’s capabilities.
File Remediation, But Make It One-Click
Sensitive file remediation today can feel clunky, especially when it comes to reaching out to users to confirm that files shared with people outside the company are shared appropriately.
That’s why Team Slacktivists set out to build a first-class Slack app that:
- Notifies file owners of over-permissioned shares directly in Slack
- Lets them approve the share, revoke access, or request help
- Executes the action with a single click (by the user, not the security team)
It’s an app that turns what might be an awkward exchange with a security analyst into a smooth experience that satisfies data governance needs without making extra work for the SOC.
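To make that concrete, here’s a minimal sketch of what the notification flow could look like, using Slack’s Bolt framework for Python. The action IDs, channel plumbing, and revocation step are hypothetical placeholders, not the Slacktivists’ actual code:

```python
# Hedged sketch of a one-click remediation ping in Slack (Bolt for Python).
# Action IDs and the revocation call are illustrative placeholders.
from slack_bolt import App

app = App(token="xoxb-...", signing_secret="...")

def notify_file_owner(owner_channel: str, file_name: str, external_user: str):
    """Ask the file owner to confirm or revoke an external share."""
    app.client.chat_postMessage(
        channel=owner_channel,
        text=f"{file_name} is shared with {external_user} outside the company.",
        blocks=[
            {"type": "section",
             "text": {"type": "mrkdwn",
                      "text": f":warning: *{file_name}* is shared with "
                              f"*{external_user}* outside the company. Intended?"}},
            {"type": "actions", "elements": [
                {"type": "button", "action_id": "approve_share",
                 "text": {"type": "plain_text", "text": "Approve"}},
                {"type": "button", "action_id": "revoke_share", "style": "danger",
                 "text": {"type": "plain_text", "text": "Revoke access"}},
                {"type": "button", "action_id": "request_help",
                 "text": {"type": "plain_text", "text": "Ask security"}},
            ]},
        ],
    )

@app.action("revoke_share")
def handle_revoke(ack, body, client):
    ack()  # Slack expects an acknowledgment within 3 seconds
    # A real app would call the file-sharing API to revoke access here.
    client.chat_postMessage(channel=body["user"]["id"],
                            text="External access revoked. Thanks for the quick cleanup!")
```

The point isn’t the plumbing; it’s that the person who actually owns the file (and the context) gets the one-click fix.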
The Slacktivists even drafted a Slack Marketplace listing to speed a potential rollout to Material’s customers. The “OMG we should ship this” energy in the room was real.
Ask Jeeves, But Security-Savvy
Material’s documentation is good. But our search experience? VERY literal.
That means:
- Users don’t always know our internal taxonomy.
- They search for the wrong term.
- They open a support ticket, which results in a human finding a doc and sending it to the user.
Enter AI-powered, plain-language doc search, built on GitBook Assist in just a couple of days.
Now users can ask:
- “How do I update the status on an issue?” (Turns out they meant “classification.”)
- “How do I mark it safe?” (Our original search experience wouldn’t have found the right doc.)
- “How do I keep someone from having to authenticate all the time?” (The new search understands the user is asking about our ATOR feature and points them to exactly the settings that need to be configured.)
The assistant:
- Answers in plain English
- Corrects terminology gently
- Links to cited sources
- Suggests follow-ups
- Educates while solving
You no longer need to know how we think in order to get what you need. And yes: we shipped this one! Users can access the assistant today in Material’s documentation center.
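For the curious, the pattern that makes this possible is semantic retrieval: embed the docs and the question into the same vector space, then match on meaning rather than keywords. GitBook Assist handles this for us in production; the toy sketch below, with stand-in documents, just illustrates the idea:

```python
# Toy illustration of meaning-based doc search. GitBook Assist provides
# this in production; the documents and titles here are stand-ins.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = {
    "Updating an issue's classification": "Change an issue's classification to safe, spam, or malicious...",
    "Configuring ATOR": "Reduce repeated authentication prompts for trusted users by...",
}
titles = list(docs.keys())
doc_vecs = model.encode(list(docs.values()))

def search(query: str) -> tuple[str, float]:
    """Return the best-matching doc title and its similarity score."""
    scores = util.cos_sim(model.encode(query), doc_vecs)[0]
    best = int(scores.argmax())
    return titles[best], float(scores[best])

# "Mark it safe" appears nowhere in the docs, but the meaning matches.
print(search("How do I mark it safe?"))
```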
Detect Attackers, But Do It by Writing Style
What if we detected attackers not by what they write, but how they write?
Team Write Stuff explored the possibility of doing exactly this. It’s a complex problem to tackle, because writing style changes based on who the person is communicating with, the platform they’re using to communicate, and factors like how many cups of coffee they’ve had (ok, I made up that last one).
This project set out to fingerprint writing style based on characteristics including:
- Topic normalcy
- Character-level typing patterns
- Grammar and word usage
- Lexical habits
The goal: build a baseline from each user’s sent emails, then score new messages against it with a confidence factor. The math flags outliers; only the suspicious ones get escalated to an LLM.
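Here’s a deliberately crude sketch of that two-stage pipeline. The real exploration used far richer signals (character-level typing patterns, lexical habits); the features and threshold below are illustrative assumptions:

```python
# Two-stage idea: cheap statistical scoring on every email, LLM review
# only for outliers. Features here are toy stand-ins for real stylometry.
import re
import numpy as np

FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in"}

def style_features(text: str) -> np.ndarray:
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return np.array([
        np.mean([len(w) for w in words]) if words else 0.0,           # lexical habit
        len(words) / max(len(sentences), 1),                          # sentence length
        sum(w in FUNCTION_WORDS for w in words) / max(len(words), 1), # grammar-word rate
    ])

def build_baseline(sent_emails: list[str]) -> tuple[np.ndarray, np.ndarray]:
    # A real baseline would need many more than two emails.
    feats = np.stack([style_features(e) for e in sent_emails])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-9

def anomaly_score(email: str, mean: np.ndarray, std: np.ndarray) -> float:
    # Max absolute z-score across features: a crude confidence factor.
    return float(np.max(np.abs((style_features(email) - mean) / std)))

mean, std = build_baseline([
    "Hi team, quick update on the launch timeline and next steps.",
    "Thanks everyone. Notes from the sync are attached, see section two.",
])
if anomaly_score("URGENT!!! wire funds imediately pls", mean, std) > 3.0:
    pass  # Only now does the email get escalated to an LLM for review.
```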
While the exploration is in its infancy, it was an exciting look into what’s possible with advances in AI. It’s adaptive. It’s subtle. It’s extremely cool.
Chief of Staff, But Make It LLM
It wasn’t just engineers who got in on the Hack Week action. Our People Ops leader created an agent that prepares a summary of the most important leadership topics every week.
Before the agent, this process was incredibly manual, relying on static dashboards and word of mouth to decide which topics to elevate to the weekly executive leadership meeting. It worked, but it felt too subjective.
Enter the LLM Chief of Staff.
It takes multiple inputs from across systems, across functions:
- Customer signals (Pylon)
- Product execution (Linear)
- Revenue (HubSpot)
- People (HiBob)
- Finance (weekly finance report)
- Zoom transcripts from key team meetings (team priorities)
It then evaluates these inputs through the lens of shared company priorities to tone down sources of subjectivity, like recency bias or dominance by the strongest personality in the room.
The goal of the LLM Chief of Staff is to get to clarity, not consensus.
This isn’t just reporting on a dashboard. It’s signal detection across the business.
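The shape of that loop is simple to sketch, even if the real integrations aren’t. In the sketch below, the fetch_signals stub, the prompt, and the model choice are all assumptions standing in for the actual Pylon/Linear/HubSpot/HiBob/Zoom plumbing:

```python
# Hedged sketch of the gather-then-synthesize loop. The data stubs, prompt,
# and model are illustrative assumptions, not the actual implementation.
from openai import OpenAI

client = OpenAI()

COMPANY_PRIORITIES = "1. Customer retention. 2. Ship the roadmap. 3. ..."

def fetch_signals() -> dict[str, str]:
    # Each value would come from the respective system's API; stubbed here.
    return {
        "Customer signals (Pylon)": "...",
        "Product execution (Linear)": "...",
        "Revenue (HubSpot)": "...",
        "People (HiBob)": "...",
        "Finance (weekly report)": "...",
        "Team priorities (Zoom transcripts)": "...",
    }

def weekly_brief() -> str:
    signals = "\n\n".join(f"## {k}\n{v}" for k, v in fetch_signals().items())
    prompt = (
        "You are a chief of staff. Given the company priorities and this "
        "week's cross-functional signals, list the three to five topics "
        "leadership must discuss, each with supporting evidence. Weigh "
        "signals against the priorities, not against how recently or how "
        "loudly they were raised.\n\n"
        f"# Priorities\n{COMPANY_PRIORITIES}\n\n# Signals\n{signals}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```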
The Bigger Picture
The most recent Hack Week wasn’t about “AI features.” It was about reducing the friction between intent and execution. I’ve only mentioned a handful of projects, but every team showed us how much is possible:
- Migrations that were too risky? Now feasible.
- Onboarding that took weeks? Now minutes.
- Alerts that overwhelmed analysts? Now contextual investigations.
- Rules that required back-and-forth? Now self-generated and backtested.
What impressed me most wasn’t the tools.
It was how quickly teams learned to compose agents, separate discovery from decision, and keep humans in the loop, strategically.
We’re not replacing humans. We’re raising the floor on what a small, focused team can accomplish in a week.
If this is what happens in five days with scratch-built agents, I can’t wait to see what happens when these patterns become muscle memory.

