Supply chain attacks rank among the stealthiest and most devastating cybersecurity threats. One scenario targets trusted software such as Visual Studio to deliver malware. By compromising a software product's supply chain, attackers embed malicious code that can trigger unauthorized actions or access sensitive system areas. Recently, multiple third parties reported to Microsoft that they had found two malicious extensions in the Visual Studio Marketplace, which Microsoft then removed. There are many variations on how developers can unwittingly run malicious code in their development environments and harm the enterprise. Such attacks are designed to fool developers. Worse, they tend to be difficult for AV and EDR to detect in time, if at all. This blog offers some conceptual insights into how enterprises can lock down such environments with application control and containment. Finally, it explains how AppGuard uniquely strikes a balance between security and developer flexibility.
The Threat: Supply Chain Attacks on Trusted Applications
In a supply chain attack, adversaries infiltrate a software vendor or an online repository to plant malicious files, components, or source code snippets.
There is speculation that AI tools might be manipulated mid-session: a user directs the tool to analyze files containing malicious instructions or targets (domains, IP addresses, URLs, APIs, etc.), which the tool tacitly treats as trustworthy. Those elements then remain in the tool's context. Should the user later ask the tool to craft source code in that same session, the malicious elements could be included in the output. This is speculative, offered to illustrate what might come.
Here and now, developers must be careful about the files they download and the source code snippets they copy and paste. Worse, when such intrusions occur, detection-based tools such as AV/EDR struggle. Normal activity on developer PCs is far more varied than on other machines, and developer tools enable vastly different and unusual behavior patterns. In sum, telling bad from good is far more difficult on a developer's machine.
Without a robust moat around each developer's environment, harm can spread to the rest of the enterprise. Do you place the moat around the developer's workstation? What about the developer environment WITHIN the workstation? Ideally, one does both.
Balancing Security with Developer Needs
Application control and containment can’t be rigid dogma—security’s vital, but developers need room to breathe. Too strict, and productivity tanks; too loose, and risks spike. The IT, cybersecurity, and developer groups must reach an understanding in the form of a security policy that reduces potential attack surface.
Restrict What Can Run and What the Running Can Do
A tool such as Visual Studio has many extensions and settings, and it depends on many utilities typically found on a developer's PC. For example, for developers who have no need for PowerShell, one can prohibit PowerShell entirely, which substantially reduces the attack surface. Alternatively, prohibit PowerShell only as a child process of Visual Studio (i.e., devenv.exe). Other children must still be allowed, such as compilers, debuggers, and other build tools. All of this restricts what can run.
The policy must also restrict what the running can do, such as which folders can be written to and which folders may feed script files into script engines. Do developers need their tools and payloads to write into the local machine registry hive (HKLM)? Are they using unsigned DLLs? Are they writing code in which one process writes into the memory of another? Can they work productively logged in as a standard user, logging in as an admin only when making system changes?
In short, one needs to determine what is allowed to run and what the developers’ environment needs to be allowed to do while running. The more this is spelled out, the less likely something goes terribly wrong.
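To make the idea concrete, here is a minimal sketch, in Python, of actor-centric containment: the policy names a running process and restricts what it may do. Every process name, path, and rule field is a hypothetical illustration, not any vendor's actual policy syntax.

```python
# Toy model of actor-centric containment: rules name the acting process and
# restrict what it may do while running. Everything here is illustrative.

CONTAINMENT_RULES = {
    "devenv.exe": {
        "writable_folders": [r"C:\Users\dev\source", r"C:\Users\dev\AppData\Local\Temp"],
        "registry_write_hives": ["HKCU"],      # no writes into HKLM
        "write_other_process_memory": False,   # no cross-process memory writes
        "load_unsigned_dlls": False,           # signed DLLs only
    },
}

def action_allowed(process: str, action: str, target: str = "") -> bool:
    """Return True if the containment policy permits the attempted action."""
    rules = CONTAINMENT_RULES.get(process)
    if rules is None:
        return True  # this toy policy does not contain the process
    if action == "write_file":
        return any(target.startswith(folder) for folder in rules["writable_folders"])
    if action == "write_registry":
        return target.split("\\", 1)[0] in rules["registry_write_hives"]
    if action == "write_process_memory":
        return rules["write_other_process_memory"]
    if action == "load_unsigned_dll":
        return rules["load_unsigned_dlls"]
    return True

# A contained devenv.exe may write into the source tree but not under HKLM:
print(action_allowed("devenv.exe", "write_file", r"C:\Users\dev\source\app.cs"))  # True
print(action_allowed("devenv.exe", "write_registry", r"HKLM\SOFTWARE\Example"))   # False
```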
Comparing Allow Lists and Deny Lists
When managing process launches, two classic approaches pop up: allow lists and deny lists. Each has its pros and cons, especially for developers; a toy comparison follows the list.
- Allow Lists: Only pre-approved processes can run—tight security that blocks unknowns outright. But for developers, this can feel like a straitjacket, halting custom tools until they’re vetted. Keeping the list current takes work and can stall projects.
- Deny Lists: These block known bad actors but let everything else slide. They’re simpler to manage but weaker—new threats like zero-day malware slip through unnoticed.
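The contrast fits in a few lines of Python. The process names are placeholders; real products evaluate hashes, signatures, and paths rather than bare names.

```python
# Allow list: default-deny. Deny list: default-allow. Names are placeholders.

ALLOW_LIST = {"devenv.exe", "msbuild.exe", "csc.exe"}
DENY_LIST = {"knownbad.exe", "oldexploit.exe"}

def allow_list_permits(process: str) -> bool:
    return process in ALLOW_LIST          # unknowns are blocked outright

def deny_list_permits(process: str) -> bool:
    return process not in DENY_LIST       # unknowns slide through

unvetted_tool = "custom_fuzzer.exe"       # a developer's brand-new utility
print(allow_list_permits(unvetted_tool))  # False: stalled until vetted
print(deny_list_permits(unvetted_tool))   # True: runs, even if malicious
```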
Restricting Parents Versus Children
There are two kinds of allow/deny lists. One is for the entire PC, which determines whether a process can run at all. Any process is a potential parent. An allow/deny list for an entire PC limits potential parent processes.
The other kind of list concerns restrictions on child processes of parent processes. In the case of Visual Studio, restrict what child processes can be spawned by devenv.exe.
Some application control tools are quite family-centric: policies specify which processes can be parents of which children. In other words, only the processes on a given list may launch PowerShell.
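A minimal sketch of the two kinds of lists working together: the PC-wide list gates whether a process may run at all, and the per-parent rule gates who may spawn it. Names and rules here are illustrative assumptions only.

```python
# Two kinds of lists: a PC-wide allow list limits what may run at all, and a
# family-centric rule limits which parents may spawn a given child.
# Process names and rules are hypothetical.

PC_WIDE_ALLOW = {"explorer.exe", "devenv.exe", "csc.exe", "vsdbg.exe", "powershell.exe"}

# Only these parents may launch the named child:
ALLOWED_PARENTS = {
    "powershell.exe": {"explorer.exe"},   # interactive use only, never from devenv.exe
}

def launch_allowed(parent: str, child: str) -> bool:
    if child not in PC_WIDE_ALLOW:
        return False                      # blocked everywhere on the PC
    permitted = ALLOWED_PARENTS.get(child)
    if permitted is not None and parent not in permitted:
        return False                      # blocked as a child of this parent
    return True

print(launch_allowed("devenv.exe", "csc.exe"))         # True: compilers must run
print(launch_allowed("devenv.exe", "powershell.exe"))  # False: wrong parent
```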
In Application Control/Containment, Inheritance Separates Easy Tools from Difficult Tools
Administering application control/containment tools can be difficult. The more policy rules that are needed, and the more often rules must be revised, the more difficult the administration becomes. Policy inheritance can simplify this dramatically, yielding greater risk reduction with far fewer policy rules. The familial metaphor extends naturally to inheritance.
Returning to our Visual Studio example, consider its parent process, devenv.exe. Policy inheritance applies the rules for the parent to its children, grandchildren, and so on. Fewer rules make for easier setup and easier maintenance over time. Those looking to use application control/containment to mitigate risks from developer environments ought to make policy inheritance a top priority in their selection.
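Here is a sketch of the idea, assuming a policy engine that walks the process ancestry: one rule naming devenv.exe covers every descendant, so the compilers and debuggers spawned during a build inherit containment with no extra rules.

```python
# Policy inheritance sketch: a containment rule written once for devenv.exe
# flows down to children, grandchildren, and beyond. Hypothetical model.

CONTAINED_ROOTS = {"devenv.exe"}  # one rule covers the whole process tree

def is_contained(process: str, ancestry: list[str]) -> bool:
    """A process is contained if it, or any ancestor, matches a contained root."""
    return any(name in CONTAINED_ROOTS for name in [process, *ancestry])

# devenv.exe spawns msbuild.exe, which spawns csc.exe; both inherit containment
# without a single additional policy rule:
print(is_contained("csc.exe", ["msbuild.exe", "devenv.exe"]))   # True
print(is_contained("notepad.exe", ["explorer.exe"]))            # False
```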
Risk Mitigation, Another Angle: Prohibiting Launches vs. Containing What Runs
While containment is effective, prohibiting launches edges it out for pure risk reduction. Thanks to quirks in how operating systems are built, application containment on Windows might never be perfect. Microsoft would have to completely redesign Windows.
Now, let’s return to striking a balance between productivity and security. When practical, it is better to prohibit the use of a potentially dangerous application than to contain it. Realistically, your developer might need some of these applications. At the very least, one should apply containment to them, which restricts what they can and cannot do.
- Prohibiting Launches: Stopping a process before it starts wipes out its threat potential. It’s a brick wall—malicious code never gets a foothold to exploit anything.
- Containment: Containment lets a process run but ties its hands. It’s great—really great—but not flawless. Modern OS complexity means total isolation’s tricky; a contained process might still nudge the kernel or lean on shared system bits in unexpected ways. Improving beyond ‘great’ would need a ground-up OS overhaul, which isn’t happening soon. Still, containment slashes risk big-time.
For top-tier security, blocking launches wins. But in dynamic setups like development, containment’s practical edge shines.
Isolation Rules Lock Down What Launch and Containment Might Not
The cybersecurity team might want assurance that certain files or registry keys cannot be changed by any computing process. Developer environments are notoriously varied and ever-changing. Anticipating every computing process that must be allowed to run, and run with restrictions, can leave gaps. Worse, something completely unanticipated can occur. To contain an application (i.e., restrict what it can do), one must anticipate that application running. Policy inheritance considerably reduces this risk, but one cannot assume that every computing process that ought to be contained has been contained.
Enter isolation rules. These are the opposite of containment rules, which focus on the actor process performing actions on various target objects (files, folders, registry keys, the memory of other computing processes, etc.). With isolation rules, one focuses on the target objects, protecting them from unspecified actor processes.
Many malware techniques must add or modify specific objects. Isolation rules lock those objects, neutralizing the techniques and stopping the malware that relies on them.
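A minimal sketch of target-centric isolation: unlike containment, the check never asks who the actor is. The protected paths below are common tampering targets chosen purely for illustration.

```python
# Target-centric isolation sketch: the protected objects are locked against
# writes from ANY actor process, anticipated or not.

ISOLATED_TARGETS = [
    r"C:\Windows\System32\drivers\etc\hosts",                 # hosts-file tampering
    r"HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run",    # autorun persistence
]

def write_allowed(actor: str, target: str) -> bool:
    # Note what is missing: we never consult the actor. The object is simply locked.
    return not any(target.startswith(locked) for locked in ISOLATED_TARGETS)

# Even a process nobody anticipated, and therefore never contained, is stopped:
print(write_allowed("never_seen_before.exe",
                    r"HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\evil"))  # False
```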
Few application control tools enforce containment at all, let alone practically. Fewer still enforce isolation policies.
AppGuard’s Custom Containment Policy for Visual Studio
AppGuard can enforce policies tailored for Visual Studio, zeroing in on its core parent executable (devenv.exe) and every process it spawns. This policy hinges on three pillars, outlined in the sketch after the list:
- Launch: Restricts what can and cannot run, and specifies the folders from which developers' code can launch.
- Containment: Boxes in what Visual Studio and its spawned processes can read/write in terms of files, folders, registry keys, and the memory of other computing processes.
- Isolation: Requires little customization. However, one might consider isolating the development environment's settings controls, which can be a file and/or a registry hive branch, so they cannot be altered.
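For illustration only, here is a hypothetical outline of the three pillars captured for Visual Studio, expressed as a Python structure. This is not AppGuard's actual policy format; every path and rule below is an assumption made for the sketch.

```python
# Hypothetical three-pillar outline for Visual Studio. NOT AppGuard's actual
# policy format; all paths and rules are assumptions made for illustration.

VISUAL_STUDIO_POLICY = {
    "launch": {
        "children_of_devenv": ["csc.exe", "msbuild.exe", "vsdbg.exe"],  # allowed spawns
        "blocked_children_of_devenv": ["powershell.exe"],
        "code_launch_folders": [r"C:\Users\dev\source\bin"],  # where built code may run
    },
    "containment": {
        "applies_to": "devenv.exe and all descendants (via inheritance)",
        "deny": ["writes to HKLM", "writes into other processes' memory"],
        "allow": [r"writes under C:\Users\dev\source"],
    },
    "isolation": {
        # Lock the IDE's settings store so NO process, contained or not, can alter it.
        "locked_targets": [r"C:\Users\dev\AppData\Local\Microsoft\VisualStudio"],
    },
}

for pillar, rules in VISUAL_STUDIO_POLICY.items():
    print(pillar, "->", rules)
```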
For developers who need to run custom tools unknown to security teams, AppGuard blends strict security with practical flexibility through inheritance-based containment. Unknown processes can run, but they stay on a short leash, fenced off from sensitive resources. It is security that boosts efficiency rather than bottlenecking it.
Conclusion
AppGuard can enforce launch, containment, and isolation policies with inheritance, giving developers the flexibility to do what they must while preventing their environments from doing what they must not. The combination of these three fundamental controls with inherited policies makes AppGuard simpler to customize and simpler to administer over time. By adding AppGuard, the enterprise no longer depends solely on its detection tools to distinguish bad from good among practically infinite possibilities. Instead, mitigating supply chain risks in developer environments becomes both easier and more effective. Other application control and/or containment vendors might say they can do this too. But they won't tell you that their solutions require far more policy rules that must be constantly re-tuned to avoid disrupting developers.