In 2022, we started a genre-defining project that we hoped would transform how people maintain their code quality. We figured we should facilitate running custom and community analyzers/linters on DeepSource alongside our core analyzers.

Macro was the internal codename for custom analyzers. The initial goal was to allow users to write their own analyzers, and we’d provide the tools to author and host them, similar to the way npm hosts community Node packages ⎯ an npm for static analyzers, so to speak. The first pass at this was simply a place on DeepSource to host these analyzers: a user would define a namespace in their team, then test and upload their analyzers using the DeepSource CLI.

Here’s a walkthrough of how Macros (custom analyzers) were supposed to work on the DeepSource dashboard 👇

https://youtu.be/IZqw2ovybaA

Before releasing Macros, we did a small trade-off analysis -

Because of the cons, we decided to temporarily shelve the project until we found a set of developers who were exceptionally good with programming languages and could craft custom analyzers. Most of our enterprise customers were happy to use open-source linters and had built their workflows around them. We figured it’d be a more fruitful exercise to leverage the underlying infrastructure of Macros to first release Community Analyzers. We could always circle back and release the ability to craft and deploy custom analyzers on DeepSource later.

Community Analyzers


Community Analyzers are open-source, third-party static analyzers that run as part of your existing CI pipeline, with the results reported to DeepSource in SARIF (Static Analysis Results Interchange Format), the OASIS standard for exchanging static analysis results. Unlike DeepSource’s core analyzers, community analyzers do not run on DeepSource's infrastructure. This approach lets you keep using DeepSource’s powerful analysis features while broadening the range of technologies and languages you can analyze ⎯ you are no longer limited to the analyzers we provide natively.
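For a sense of what gets reported, here’s a minimal SARIF document, trimmed down to the fields that matter most for surfacing an issue; the tool name, rule ID, file path, and message are purely illustrative.

```json
{
  "version": "2.1.0",
  "runs": [
    {
      "tool": {
        "driver": {
          "name": "example-linter",
          "rules": [{ "id": "EX001" }]
        }
      },
      "results": [
        {
          "ruleId": "EX001",
          "level": "warning",
          "message": { "text": "Container image uses the mutable 'latest' tag." },
          "locations": [
            {
              "physicalLocation": {
                "artifactLocation": { "uri": "deploy/deployment.yaml" },
                "region": { "startLine": 14 }
              }
            }
          ]
        }
      ]
    }
  ]
}
```

Because SARIF is tool-agnostic, any linter that can emit this shape, natively or through a converter, can plug into the same reporting flow.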

Earlier, you had to rely solely on the set of core analyzers designed and developed by DeepSource. While these analyzers were far more powerful than open-source linters, they often forced users to create their quality gates’ configuration from scratch ⎯ we used different issue codes and offered our own infrastructure for setting quality gates. This lack of interoperability proved to be a significant hindrance.

Community Analyzers went live earlier this year, and we followed a model similar to Raycast’s for maintaining a directory of open-source extensions ⎯ an open-source repository that stores the code for community analyzers, which anyone can contribute to.

<aside> 🍧 You can read more about Raycast extensions and the Raycast API here ↗️

</aside>

Leveraging Community Analyzers is straightforward and mirrors the usage of core analyzers, with just one extra step. All supported community analyzers can be found in the Analyzer Directory. After enabling the analyzer in the repository's .deepsource.toml configuration file, you can use one of the CI configuration snippets that are pre-configured to execute the analyzer and report the results in SARIF format back to DeepSource.
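As a rough sketch of those two pieces, taking the KubeLinter community analyzer (mentioned below) as an example: the analyzer shortcode, CLI flags, and paths here are illustrative ⎯ the pre-configured snippet in the Analyzer Directory is the source of truth.

```toml
# .deepsource.toml ⎯ enable the community analyzer (shortcode is illustrative)
version = 1

[[analyzers]]
name = "kube-linter"
```

```yaml
# CI step (GitHub Actions syntax) ⎯ run the linter, emit SARIF, send it to DeepSource
- name: Run kube-linter and report to DeepSource
  env:
    DEEPSOURCE_DSN: ${{ secrets.DEEPSOURCE_DSN }}
  run: |
    # kube-linter exits non-zero when it finds issues; keep the step alive so the report still ships
    kube-linter lint manifests/ --format sarif > kube-linter.sarif || true
    # Install the DeepSource CLI and report the SARIF artifact.
    # Flags are illustrative; copy the exact command from the Analyzer Directory snippet.
    curl https://deepsource.io/cli | sh
    ./bin/deepsource report --analyzer kube-linter --analyzer-type community --value-file kube-linter.sarif
```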

For the initial release, we prioritized adding linters for technologies that were not supported by core analyzers. For instance, we released support for KubeLinter since we had no Kubernetes analyzer, while commonly available linters like ESLint for JavaScript were scheduled for a later release.

Designing for Community Analyzers