This post is the first in a series about empowering product, UX, and engineering teams with AI. We're going to focus on how we've approached and scaled the use of AI for writing code.

AI has fundamentally transformed how we build software at HubSpot. Over the past two years, we've gone from cautious experimentation to near-universal adoption of AI coding tools across our engineering organization.

This transformation didn't happen overnight. It required strategic investment, organizational commitment, and a willingness to learn. As we've shared our experience with other engineering leaders, we've discovered that many teams are facing similar challenges in scaling AI adoption beyond POCs and early adopters. The conversations we've had with external teams convinced us that our lessons learned could help others navigate this journey more effectively.

Adoption of AI Coding

Adoption of AI coding assistants, % of members in the Engineering organization [sanitized data]

In the beginning there was code completion

We began experimenting with GitHub Copilot in the summer of 2023. Our founders, Dharmesh and Brian, gave us the push we needed to get started: Dharmesh had recently used GitHub Copilot to build ChatSpot and had a good experience with it, so he and Brian urged us to evaluate it and connected us with other leaders in the industry who were seeing success.

Our proof of concept (POC) successfully validated the tool's potential, and several factors contributed to our success:

  • Executive buy-in made everything else easier. Support from Dharmesh and Brian accelerated our pilot process significantly and aligned our legal, security, and engineering teams around the same goal and sense of urgency.
  • We ran a pilot that was sufficiently large: Our strategy was to include entire teams, spanning different experience levels, missions, and domains, so they could adopt and learn together. We gave teams over two months to try it.
  • We put energy into enablement. We held setup and training sessions and created a channel where people could ask questions and share what was and wasn't working.
  • We measured everything. We applied our existing engineering velocity measurement methods to the pilot. This helped us check our biases. We were skeptical at the outset, but seeing measured impact chipped away at our skepticism.

The initial results were encouraging: positive qualitative feedback and measurable but modest productivity improvements across engineers of different tenure and seniority. Our initial gains fell short of some of the extraordinary claims we were hearing in the market, but they were still significant given Copilot's cost of $19 per business user per month at the time. Even modest time savings justified the investment.

With a group of committed stakeholders who saw the early value and potential of the tool, we were willing to be patient and continue our investment. We believed the technology would only improve over time, so we rolled it out with guardrails. As we scaled adoption and people gained more experience, we saw increasingly meaningful productivity gains.

Leveraging the Power of Central Teams

At HubSpot, we've long believed in the leverage that central teams create. Our platform teams build infrastructure, tools, and guardrails that enable small autonomous product teams to move fast while maintaining quality and consistency.

When generative AI emerged, we initially relied on teams adjacent to these areas (specifically teams that managed our GitHub setup) to drive adoption. But as demand exploded, the backlog grew exponentially. We realized it was time to create a dedicated team. 

We created a Developer Experience AI team in October 2024, with an initial focus on:

  • Driving adoption of coding tools: Once we realized the impact these tools were having, we wanted the entire org on board as soon as possible
  • Increasing the impact of AI tools: HubSpot has a very opinionated stack, and we wanted our generated code to reflect these opinions as much as possible. This started very simply with sharing Cursor rules files (see the illustrative sketch after this list), but quickly evolved into more complex tools that gave agents deep context about our architecture, libraries, and best practices. (More on this later in the series.)
  • Advocacy: We wanted to build a community around AI, by collecting and disseminating what was working for people. We created an open forum for people to post about AI and seeded content to drive engagement. We saw a vibrant community slowly spring up as adoption grew.
  • Adapting procurement for speed: We knew we wanted to try every tool that came out, but our purchasing processes were designed for longer-term negotiated agreements, and we couldn't always count on a push from our founders to get things moving. We wanted month-to-month contracts and to get started ASAP.
  • Building evaluation capabilities: We didn't want to rely solely on qualitative feedback, so we came up with ways to run pilots and compare tools on merit. We also experienced first-hand how empirical data could combat preconceptions and skepticism.
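
To make the rules-file starting point concrete, here is a minimal sketch of what a shared Cursor rules file might look like. The conventions below are illustrative placeholders, not HubSpot's actual rules.

```
# .cursorrules (illustrative sketch, not HubSpot's actual rules)

You are working in a large TypeScript and Java monorepo.

- Use the shared data-fetching hooks instead of calling fetch() directly.
- Build new UI with our design system components rather than raw HTML elements.
- Prefer existing internal utilities over adding new third-party dependencies.
- Every new endpoint needs an integration test alongside the handler.
```

A file like this rides along in the repository, so every engineer's assistant picks up the same opinions without any per-machine setup.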

Central infrastructure teams create leverage for product teams in every facet of their daily work. AI is no different. We started very small with just two people who had infrastructure experience and were already highly engaged with AI. The team grew over time as we branched out into more advanced use cases, many of which we'll cover in this series. But creating the team and giving those engineers a clear focus paved the way for our future success without a massive up-front investment.

Tipping the Scale

As engineers adopted the tools and we collected more data, our conviction grew that these tools were having a positive impact on our engineering team. Initially, we maintained conservative usage rules due to limited experience and cost concerns. Users had to request a license and agree to follow strict guardrails.

We pulled metrics on code review burden, cycle time, velocity comparisons before and after adoption, and production incident rates.

Incident Rates across Teams

Impact of AI adoption on incidents, team level data [sanitized]

The data consistently showed the same thing: AI adoption wasn't creating the problems we were initially worried about. The scatter plot above is one example: it showed no correlation between AI adoption and production incidents.
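
As an illustration of the kind of team-level check behind that chart, here is a minimal sketch in Python. The data source and column names are hypothetical placeholders, not our actual pipeline.

```python
# Illustrative sketch: the CSV and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

# One row per team: share of engineers actively using AI coding tools,
# and production incidents per deploy over the same period.
teams = pd.read_csv("team_metrics.csv")

r, p = pearsonr(teams["ai_adoption_pct"], teams["incidents_per_deploy"])
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

# A correlation near zero (with a high p-value) is what you'd expect
# if AI adoption isn't driving production incidents.
```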

In May 2024, we ditched the restrictions. Then we proactively gave everyone a seat, making it as easy as possible to get started. Adoption shot above 50% overnight.

Reaching the Late Majority

Adoption growth slowed again as we climbed past 60%. The later stages of adoption are where you face skeptics, start to better understand the limitations of the current tools, and see higher levels of change and risk aversion, so we had to change our approach:

  • Peer validation: Whenever we heard someone did something interesting with AI, we asked them to record a video and share it. We also began recording weekly videos ourselves showing new features and real usage. 
  • Quantitative proof: We shared high-level data showing that most people were already using these tools successfully and safely. We deliberately kept the numbers broad rather than getting into precise details. While data was important for making decisions, we wanted people to focus on the clear trend of improvement rather than getting stuck debating exact figures.
  • Provide better tools: We ran POCs for multiple coding assistants to give engineers more options, recognizing that different tools work better for different workflows and preferences.
  • Curated experience: We transparently set up a local MCP server on every machine with default rules and configuration optimized for our development environment. This gave every engineer an experience tailored to our specific stack and best practices right out of the box. We continue to revise and improve this setup over time based on what we learn about effective usage patterns. (A minimal sketch of such a server follows this list.)
  • AI fluency as a baseline expectation: Once we hit 90% adoption, we made AI fluency a baseline expectation for engineers by adding it to job descriptions and hiring expectations. By this point, it was easy to see that AI fluency wasn't just the right thing for HubSpot; for engineers, it was a necessary investment in their own careers through this transformation. This helped us clearly commit to the investment, both internally and externally.
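
To give a flavor of that out-of-the-box experience, here is a minimal sketch of a local MCP server that serves stack-specific guidance to a coding assistant, written against the open-source Python MCP SDK. The tool name and the conventions it returns are hypothetical, not our actual setup.

```python
# Minimal illustrative sketch using the open-source MCP Python SDK
# (pip install mcp). The tool and its contents are hypothetical placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("dev-env-context")

@mcp.tool()
def coding_conventions(topic: str) -> str:
    """Return house conventions for a topic, e.g. 'react' or 'java-services'."""
    conventions = {
        "react": "Use design system components; route data fetching through shared hooks.",
        "java-services": "Follow the standard service template; new endpoints need integration tests.",
    }
    return conventions.get(topic, "No specific guidance; follow general best practices.")

if __name__ == "__main__":
    # Runs over stdio by default, so local coding assistants can connect to it.
    mcp.run()
```

An assistant configured to use a server like this can pull house conventions into its context before generating code, which is what makes the experience feel tailored from the first prompt.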

Adoption was the beginning and opened the door to everything that followed: taking advantage of coding agents, creating Sidekick (our AI assistant that answers platform questions, creates issues, implements changes, and reviews PRs), developing a way to rapidly prototype UIs with our design system, and building infrastructure that led to 400+ tools that our agents can leverage across our internal, OpenAI, and Anthropic MCP servers.


Next: How we transitioned to agentic coding
