
Hack Week in the Age of AI Agents: What Happens When You Give Smart People Smart Tools

Material Security’s Hack Week in February 2026 focused on the productivity that’s unlocked by working with AI agents

People
February 19, 2026
4m read
Kate Hutchinson

Folks who’ve worked in startups know the special vibe that surrounds Hack Week. Slack gets quieter. Conference rooms become scarce. And if you’re really lucky, the margarita machine makes a happy hour appearance next to the coffee urns.

Suddenly, the ideas that have been sitting in the “we should really…” category turn into, “wait, is that in prod?”

This year’s Hack Week at Material Security had a theme: what becomes possible when AI agents are real teammates? Not autocomplete. Not chatbots. Teammates.

The results were equal parts impressive, slightly alarming (in a good way), and deeply inspiring. Let’s walk through some examples of what my colleagues built to give a flavor of what’s possible! While this isn’t a promise to ship specific features, it’s a fascinating peek behind the scenes at how Material’s team members think about Material’s capabilities.

File Remediation, But Make It One-Click

Sensitive file remediation today can feel clunky, especially when it involves reaching out to users to confirm that files shared with people outside the company are being shared appropriately.

That’s why Team Slacktivists set out to build a first-class Slack app that:

  • Notifies file owners of over-permissioned shares directly in Slack
  • Lets them approve the share, revoke access, or request help
  • Executes the action with a single click (by the user, not the security team)

It’s an app that turns what might be an awkward exchange with a security analyst into a smooth experience that satisfies data governance needs without making extra work for the SOC. 
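To make the one-click flow concrete, here's a minimal sketch (not the Slacktivists' actual code) of the kind of interactive message such an app might post, using Slack's Block Kit JSON format. The file name, recipient, and `action_id` values are illustrative.

```python
def build_remediation_message(file_name: str, shared_with: str) -> dict:
    """Build a Block Kit payload asking the file owner to act on a share."""
    return {
        "blocks": [
            {
                # The notification itself, addressed to the file owner.
                "type": "section",
                "text": {
                    "type": "mrkdwn",
                    "text": (
                        f"*{file_name}* is shared with *{shared_with}*, "
                        "outside your company. Is that intended?"
                    ),
                },
            },
            {
                # One-click actions; the app would handle the resulting
                # block_actions payload and execute the chosen action.
                "type": "actions",
                "elements": [
                    {"type": "button", "action_id": "approve_share",
                     "text": {"type": "plain_text", "text": "Approve"}},
                    {"type": "button", "action_id": "revoke_access",
                     "style": "danger",
                     "text": {"type": "plain_text", "text": "Revoke access"}},
                    {"type": "button", "action_id": "request_help",
                     "text": {"type": "plain_text", "text": "Ask security"}},
                ],
            },
        ]
    }
```

The key design point is that the buttons carry the decision back from the file owner, so the security team only hears about the shares that genuinely need their attention.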

The Slacktivists even drafted a Slack Marketplace listing to speed the process of potentially getting the app into the hands of Material’s customers. The “OMG we should ship this” energy in the room was real.

Ask Jeeves, But Security-Savvy

Material’s documentation is good. But our search experience? VERY literal.

That means:

  • Users don’t always know our internal taxonomy.
  • They search for the wrong term.
  • They open a support ticket, which results in a human finding a doc and sending it to the user.

Enter AI-powered, plain-language doc search, built on GitBook Assist in just a couple of days.

Now users can ask:

  • “How do I update the status on an issue?” (Turns out they meant “classification.”)
  • “How do I mark it safe?” (Our original search wouldn’t have found the right doc.)
  • “How do I keep someone from having to authenticate all the time?” (The new search understands the user means our ATOR feature and points to exactly the settings that need to be configured.)

The assistant:

  • Answers in plain English
  • Corrects terminology gently
  • Links to cited sources
  • Suggests follow-ups
  • Educates while solving

You no longer need to know how we think in order to get what you need. And yes: we shipped this one! Users can access the assistant today in Material’s documentation center.
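A toy illustration of why plain-language search beats literal matching (this is not how GitBook Assist actually works internally): normalize the user's phrasing with a synonym map before scoring docs by word overlap. The synonyms, doc titles, and summaries below are made up for the example.

```python
# Map user vocabulary onto the product's internal taxonomy.
SYNONYMS = {"status": "classification", "safe": "classification"}

# Tiny stand-in corpus: title -> short summary text.
DOCS = {
    "Updating issue classification": "how to change the classification on an issue",
    "Configuring ATOR": "reduce repeated authentication prompts with ator",
}

def normalize(query: str) -> set[str]:
    """Lowercase, strip punctuation, and apply the synonym map."""
    words = (w.strip("?.,!") for w in query.lower().split())
    return {SYNONYMS.get(w, w) for w in words}

def best_doc(query: str) -> str:
    """Return the doc title with the largest word overlap with the query."""
    q = normalize(query)
    return max(DOCS, key=lambda title: len(q & set(DOCS[title].split())))
```

A literal search for "status" finds nothing; after normalization, the same query lands on the classification doc. A real assistant does this with embeddings and an LLM rather than a hand-built map, but the effect is the same: users don't need to know the taxonomy.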

Detect Attackers, But Do It by Writing Style

What if we detected attackers not by what they write, but how they write?

Team Write Stuff explored the possibility of doing exactly this. It’s a complex problem to tackle, because writing style changes based on who the person is communicating with, the platform they’re using to communicate, and factors like how many cups of coffee they’ve had (ok, I made up that last one).

This project set out to fingerprint writing style based on characteristics including:

  • Topic normalcy
  • Character-level typing patterns
  • Grammar-word usage
  • Lexical habits

The goal was to build a baseline from sent emails and analyze new ones with a confidence factor. Math flags outliers. Only suspicious ones get escalated to an LLM.
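Here's a toy sketch of the "math first, LLM only for outliers" idea: build a per-sender baseline of simple style features from sent mail, then score new messages by their distance from that baseline. The features and the z-score thresholding are illustrative stand-ins, not the team's actual model.

```python
import statistics

def style_features(text: str) -> dict:
    """A few cheap, character- and word-level style signals."""
    words = text.split()
    return {
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
        "exclaim_rate": text.count("!") / max(len(text), 1),
        "comma_rate": text.count(",") / max(len(text), 1),
    }

def build_baseline(sent_emails: list[str]) -> dict:
    """Mean and stdev of each feature across a sender's history."""
    rows = [style_features(t) for t in sent_emails]
    return {
        k: (statistics.mean(r[k] for r in rows),
            statistics.stdev(r[k] for r in rows) or 1e-9)  # guard zero stdev
        for k in rows[0]
    }

def anomaly_score(baseline: dict, text: str) -> float:
    """Max z-score across features; only high scores go to an LLM."""
    feats = style_features(text)
    return max(abs(feats[k] - mu) / sigma for k, (mu, sigma) in baseline.items())
```

The cheap math runs on every message; an LLM only ever sees the small slice whose score crosses a threshold, which keeps the cost of the expensive analysis bounded.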

While the exploration is in its infancy, it was an exciting look into what’s possible with advances in AI. It’s adaptive. It’s subtle. It’s extremely cool.

Chief of Staff, But Make It LLM

It wasn’t just engineers who got in on the Hack Week action. Our People Ops leader created an agent that prepares a summary of the most important leadership topics every week.

Before the agent, this process was incredibly manual and relied on static dashboards and word of mouth to understand what topics to elevate to the weekly executive leadership meeting. The process was fine, but it felt too subjective.

Enter the LLM Chief of Staff.

It takes multiple inputs from across systems, across functions:

  • Customer signals (Pylon)
  • Product execution (Linear)
  • Revenue (HubSpot)
  • People (HiBob)
  • Finance (weekly finance report)
  • Zoom transcripts from key team meetings (team priorities)

It then evaluates these inputs through the lens of shared company priorities to tone down sources of subjectivity like recency bias or a dominant personality steering the agenda.

The goal of the LLM Chief of Staff is to get to clarity, not consensus.

This isn’t just reporting on a dashboard. It’s signal detection across the business.
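The aggregation step can be sketched in a few lines. This is a hand-wavy stand-in for what the agent actually does with an LLM: pull signals from the source systems, score each against shared priorities, and surface the top few for the leadership meeting. The priority names, weights, and signal shape are all illustrative.

```python
# Hypothetical shared priorities with relative weights.
PRIORITIES = {"customer retention": 3.0, "ship roadmap": 2.0, "hiring": 1.0}

def score(signal: dict) -> float:
    """Weight a signal by how strongly its tags match company priorities."""
    return sum(PRIORITIES.get(tag, 0.0) for tag in signal["tags"]) * signal["severity"]

def weekly_brief(signals: list[dict], top_n: int = 3) -> list[str]:
    """Rank signals from all systems and keep only the most important."""
    ranked = sorted(signals, key=score, reverse=True)
    return [s["summary"] for s in ranked[:top_n]]
```

Scoring against a fixed set of priorities, rather than whatever came up most recently, is exactly the mechanism that tones down recency bias: a quiet customer-retention signal outranks a loud but low-priority one.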

The Bigger Picture

The most recent Hack Week wasn’t about “AI features.” It was about reducing the friction between intent and execution. I only mentioned a handful of projects, but each team taught us that so much is possible:

  • Migrations that were too risky? Now feasible.
  • Onboarding that took weeks? Now minutes.
  • Alerts that overwhelmed analysts? Now contextual investigations.
  • Rules that required back-and-forth? Now self-generated and backtested.

What impressed me most wasn’t the tools.

It was how quickly teams learned to compose agents, separate discovery from decision, and keep humans in the loop, strategically.

We’re not replacing humans. We’re raising the floor on what a small, focused team can accomplish in a week.

If this is what happens in five days with scratch-built agents, I can’t wait to see what happens when these patterns become muscle memory.

