LinearB

Software Development

Los Angeles, California · 11,887 followers

The Engineering Productivity Platform.

About us

LinearB is an engineering productivity platform that helps enterprises improve their developer experience, efficiency, and effectiveness. Unlike other solutions, LinearB leverages AI agents and programmable workflows to help your developers safely and quickly build, version, and deploy changes. With full visibility and control over your team’s operations, you can now define exactly how your team’s code is brought to production. Learn more at www.linearb.io

Website
https://www.linearb.io
Industry
Software Development
Company size
51-200 employees
Headquarters
Los Angeles, California
Type
Privately Held
Founded
2018

Updates

  • Most engineering teams are flying blind on their AI investments. After working with hundreds of engineering teams scaling AI adoption, we've seen the same pattern: excitement about AI tools, followed by confusion about whether they're actually working. Executives want ROI. Boards track AI productivity as a KPI. But most teams are stuck measuring the wrong things: seat counts, hours saved, lines of code generated. That's why we built the LinearB AI Measurement Framework. It's built on two core principles: meaningful adoption and measurable impact. Here's what elite engineering teams measure:

    Adoption that matters:
    - Daily active users by team (not just seat counts)
    - Code acceptance rates from AI suggestions
    - Developer confidence in AI recommendations

    Impact that counts:
    - Time to merge: Are PRs getting approved faster?
    - Merge frequency: Are teams shipping more value per sprint?
    - Rework rate: Is AI-generated code creating technical debt?

    The key is attribution: distinguishing between human-authored, AI-assisted, and fully AI-generated code throughout your delivery pipeline. This isn't about proving AI works in theory. It's about proving your AI investment is delivering measurable business value. We've tested this framework with some of the world's best engineering teams. Now we're open-sourcing our approach to help the entire community measure AI impact effectively. Get the complete AI Measurement Framework, tools, metrics, and best practices included: https://lnkd.in/gfCFceei
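For illustration only, and not LinearB's product code: a minimal Python sketch of how the framework's impact metrics could be computed from merged pull requests, grouped by attribution. The record fields ("attribution", "opened", "merged", "reworked") and the sample data are hypothetical placeholders.

```python
# Minimal sketch (not LinearB product code): average time to merge and rework
# rate per attribution group. Field names and sample values are hypothetical.
from datetime import datetime
from statistics import mean

prs = [
    {"attribution": "ai_assisted", "opened": datetime(2025, 6, 2, 9, 0),
     "merged": datetime(2025, 6, 2, 15, 30), "reworked": False},
    {"attribution": "human", "opened": datetime(2025, 6, 3, 10, 0),
     "merged": datetime(2025, 6, 4, 11, 0), "reworked": True},
    {"attribution": "ai_generated", "opened": datetime(2025, 6, 5, 8, 0),
     "merged": datetime(2025, 6, 5, 12, 0), "reworked": False},
]

def metrics_by_attribution(prs):
    """Average time to merge (hours) and rework rate per attribution group."""
    groups = {}
    for pr in prs:
        groups.setdefault(pr["attribution"], []).append(pr)
    report = {}
    for label, items in groups.items():
        hours = [(p["merged"] - p["opened"]).total_seconds() / 3600 for p in items]
        report[label] = {
            "avg_time_to_merge_hours": round(mean(hours), 1),
            "rework_rate": sum(p["reworked"] for p in items) / len(items),
        }
    return report

if __name__ == "__main__":
    for label, stats in metrics_by_attribution(prs).items():
        print(label, stats)
```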

  • Letting go is the hardest upgrade in engineering leadership. The “I’ll do it myself” reflex is fast today and a trap tomorrow. This week, Minh Nguyen, VP of Engineering at Transcend, shares the shift from high-performing IC to executive at a high-growth startup. Her north star: maintain high-fidelity information as the org scales so leaders stay connected to what builders are experiencing on the front line. At LinearB, this truth resonates with us. We build for this kind of clarity across code review and delivery, so teams review less, catch more, and ship smarter.

    Also in The Download this week:
    - 🧊 Figma's IPO debut signals renewed appetite for profitable software with real adoption
    - 🕵️ Perplexity vs Cloudflare, a reminder that data governance and crawler controls need teeth
    - 🗣️ Obsidian CEO Steph Ango on “ramblings” channels to strengthen remote team cohesion
    - 😩 Kelly Vaughn on AI adding meetings and busywork when adoption is unmanaged
    - 📚 Jennifer Riggins on why code creation is not the bottleneck

  • LinearB reposted this

    Alex L., Engineering Manager at Yum! Brands

    The best part of working in engineering isn’t just shipping features: it’s figuring out how to work better together without drowning in process. At Yum! Brands' Pizza Hut Digital Ventures, we’ve made some big changes to how we plan and review work. More alignment between teams, less back-and-forth, and a lot less “Hey, can you review my PR today?” The @LinearB team pulled together a case study on what’s been working for us. Always a bit weird seeing your own work turned into a “story,” but it’s a good reminder that the small process wins add up. Full case study here: https://lnkd.in/e7Wxqkij Always curious how other teams are tackling review bottlenecks. What’s been working for you?

  • The best way to measure AI productivity involves asking your developers. Measuring people feels harder than measuring technology. We want clean metrics, 50% faster, 60% more efficient, because quantitative measures feel more legitimate. But here's what we've learned: the most important signal for whether AI tools are working isn't in your dashboards. It's in the answers to three simple questions:
    - Are you finding these tools useful?
    - Are you concerned about using them?
    - What's blocking you from being more productive with these tools?

    Regardless of what tool or process improvement you roll out, your first line of evaluation should always be: "Has this helped you?" Your developers are the ones using these AI tools daily. They know when something genuinely improves their workflow versus when it creates more friction. They can tell you if that "productivity boost" is real or just moving complexity around. Before you chase the perfect productivity metric, start with the simplest one: developer satisfaction. When your team says the tools are helping them do better work, that's when you know your AI investment is paying off. Watch the full discussion: https://lnkd.in/gu4DC6W7

  • Meta built an AI-powered bug hunter, and it's changing how teams test code. Their new system, ACH, uses LLMs to simulate realistic bug scenarios based on plain-language prompts. The goal is to catch issues like privacy leaks before they go live.

    Here's what ACH does differently:
    • Engineers describe the bugs they care about most
    • AI mutates the code to simulate those bugs
    • Custom test cases are generated and validated automatically

    The results:
    • 73% of AI-generated tests were accepted by engineers
    • Over 500 privacy-targeted tests created
    • Applied across Android apps like Facebook, Messenger, and WhatsApp

    This is about better tests rather than more tests. ACH prioritizes context, safety, and real-world risk rather than code coverage. The takeaway is that LLMs can drive quality, governance, and security when deeply integrated into your SDLC, not only write code. Want to see how companies like Meta and Google are building AI-first engineering workflows? 📊 Get the full 2025 AI Data Report → https://lnkd.in/gfdKtx9B
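The post describes a mutate-then-validate loop: inject a described bug, generate a test, and keep the test only if it catches the mutant. Below is a toy Python sketch of that loop under stated assumptions; it is not Meta's ACH code, the llm_* helpers are hypothetical stand-ins for real model calls, and pytest is assumed to be installed.

```python
# Toy sketch of a mutate-then-validate loop; NOT Meta's ACH implementation.
import subprocess
import tempfile
from pathlib import Path

def llm_mutate(source: str, bug_description: str) -> str:
    """Stand-in: ask a model to inject the described bug into the source."""
    raise NotImplementedError("wire this to your LLM provider")

def llm_write_test(source: str, bug_description: str) -> str:
    """Stand-in: ask a model to write a test intended to catch that bug."""
    raise NotImplementedError("wire this to your LLM provider")

def test_catches_mutant(test_code: str, mutant_code: str) -> bool:
    """Keep a generated test only if it fails against the mutated code."""
    with tempfile.TemporaryDirectory() as tmp:
        Path(tmp, "target.py").write_text(mutant_code)
        Path(tmp, "test_target.py").write_text(test_code)
        result = subprocess.run(
            ["python", "-m", "pytest", tmp, "-q"], capture_output=True
        )
        return result.returncode != 0  # a failing run means the bug was detected

def generate_tests(source: str, bug_descriptions: list[str]) -> list[str]:
    """For each engineer-described bug: mutate, generate a test, validate it."""
    kept = []
    for description in bug_descriptions:
        mutant = llm_mutate(source, description)
        test = llm_write_test(source, description)
        if test_catches_mutant(test, mutant):
            kept.append(test)  # candidate for engineer review
    return kept
```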

  • What if your APIs, not your app, become your company’s most valuable asset? In our latest episode of Dev Interrupted, Matt DeBergalis (then-CTO, now-CEO of Apollo GraphQL) explains why you won’t be able to rely on your app as your calling card much longer. Because after all, you can only ship so much net new in a year. AI agents aren’t just calling APIs, they’re reshaping how software gets designed and deployed. They're recombining tools and endpoints in ways that, until now, were unmaintainable or unscalable. The future isn’t about hand-written integrations or perfectly scoped microservices. It’s about semantic orchestration at scale with your API as the map. Matt calls it the “agent experience.” We call it a wake-up call for engineering leaders. Because when an LLM becomes your top API consumer, precision, structure, and meaning matter more than ever. The old rules don’t apply, and systems not built for this reality won’t keep up. And special thanks to Andrew Boyagi, newly-promoted Customer CTO at Atlassian (congrats to you too!), for being part of our news segment today. Be sure to check out the deep dive into Atlassian's latest developer survey on AI adoption.

  • LinearB reposted this

    Ori Keren, Co-founder & CEO at LinearB (linearb.io)

    During my time as an engineering leader, one of my frustrations was seeing talented teams slowed down by DevEx bottlenecks. That's why SurveyMonkey's transformation story hits so close to home. They went from manual EC2 rebuilds to a fully automated, Kubernetes-based platform with enterprise-grade compliance. The catalyst was LinearB workflow automation, which gave them control over their development pipeline.

    Here's what made the difference:
    - Automated PR routing eliminated additional developer approval delays
    - Intelligent auto-labeling created audit trails for SOC2 compliance
    - Custom workflows standardized across all teams

    The transformation was remarkable: faster cycle times, improved developer experience, and an infrastructure team that moved from reactive to strategic. As their team said: "We don't want to build custom tools unless we absolutely have to. LinearB automation gave us everything we needed right out of the box." This is the evolution we're seeing across enterprise engineering teams: moving from reactive DevEx to proactive workflow automation. Get the full breakdown here: https://lnkd.in/d3XCpSiU

  • Visibility, automated oversight, and clear definitions are key to good governance. Good governance practices are an enabler for AI adoption at scale. As AI agents begin to take actions like merging PRs, triggering deployments, or responding to production errors, oversight becomes mission-critical. The probabilistic nature of LLMs makes explainability and auditability difficult but essential. Yet many organizations approach governance as an afterthought or, worse, as a barrier to innovation. This approach is backwards. Effective governance should be an enabler that creates enough trust to adopt AI at scale. For AI to drive meaningful impact in enterprise environments, people must have confidence that its outputs are accurate, reliable, and grounded in beneficial outcomes, not riddled with risks or unintended consequences.

    Good governance rests on three components: visibility into AI deployment and its impact on quality and efficiency; automated oversight for security, compliance, and best practices; and clear definitions of developers’ ownership and responsibilities in the code review process.

    AI workflows will require implementing human-in-the-loop mechanisms, building clear audit trails, and creating explainability features that help users understand AI reasoning. That means designing governance frameworks that show developers the most straightforward path forward rather than shifting compliance responsibility onto them. As we move toward a more autonomous view of the world, these governance frameworks become essential enablers that allow organizations to scale AI adoption while maintaining accountability, trust, and safety.

    Download the guide: The 6 trends shaping the future of AI-driven development https://lnkd.in/gfthQJYE
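As a hypothetical illustration, not LinearB's implementation, the Python sketch below shows two of the mechanisms named above: a human-in-the-loop gate for risky agent actions and an append-only audit trail. The action names and the risk policy are assumptions.

```python
# Hypothetical illustration (not LinearB's implementation): a human-in-the-loop
# gate for risky agent actions plus an append-only audit trail of decisions.
import json
import time
from dataclasses import asdict, dataclass

RISKY_ACTIONS = {"merge_pr", "trigger_deploy"}  # assumed policy; adjust per org

@dataclass
class AgentAction:
    agent: str      # which AI agent proposed the action
    action: str     # e.g. "merge_pr", "add_label"
    target: str     # e.g. a PR identifier
    reasoning: str  # model-provided explanation, kept for explainability

def record(entry: dict, path: str = "audit_log.jsonl") -> None:
    """Append one JSON line per decision to an audit trail."""
    entry["timestamp"] = time.time()
    with open(path, "a") as log:
        log.write(json.dumps(entry) + "\n")

def execute(action: AgentAction, human_approves) -> bool:
    """Run low-risk actions directly; route risky ones through a human gate."""
    approved = True
    if action.action in RISKY_ACTIONS:
        approved = human_approves(action)  # callback: chat prompt, UI, etc.
    record({**asdict(action), "approved": approved})
    if approved:
        print(f"executing {action.action} on {action.target}")  # dispatch here
    return approved

if __name__ == "__main__":
    demo = AgentAction("review-bot", "merge_pr", "repo#123", "all checks green")
    execute(demo, human_approves=lambda a: input(f"Approve {a.action}? [y/N] ") == "y")
```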


Funding

LinearB: 4 total rounds

Last Round

Series B

US$ 50.0M

See more info on Crunchbase