While both Bazel and Buck2 use Starlark as their extension language, the implementations differ significantly in architecture, capabilities, and philosophy. This guide covers the differences developers need to understand when choosing between these build systems or migrating from one to the other.
Caution
Despite both using “Starlark” and .bzl files, Bazel and Buck2 are not compatible. Rules written for one system will not work in the other without significant modification. The underlying execution models, provider systems, and APIs are fundamentally different.
| Aspect | Bazel | Buck2 |
| --- | --- | --- |
| Core Language | Java (with some C++) | Rust |
| Starlark Interpreter | starlark-java (native) | starlark-rust |
| Rule Location | Mix of Java built-ins + Starlark | 100% Starlark (prelude) |
| Type System | Basic annotations | Advanced (record, enum, union) |
| Extension Mechanism | Aspects | BXL (Buck Extension Language) |
| Build Graph | Phased (loading, analysis, execution) | Single incremental graph (DICE) |
| File Names | BUILD, .bzl | BUCK, .bzl, .bxl |
| Remote Execution | Native support (added after initial design) | RE-first design from inception |
| Open Source | 2015 | 2023 |
| Primary Users | Google, community | Meta, community |
The most fundamental difference between Bazel and Buck2 lies in their Starlark interpreters. This affects performance, available language features, and extension capabilities.
Caution
Important clarification: Bazel uses its own Java implementation (net.starlark.java), NOT starlark-go. The starlark-go project is a separate Go implementation used by other tools (Tilt, Copybara, etc.), but not by Bazel itself. This is a common misconception.
Bazel uses starlark-java, a native Java implementation within the Bazel codebase:
// From net.starlark.java - Bazel's native Starlark implementation
// Located in src/main/java/net/starlark/java/
package net.starlark.java.eval;
Key characteristics:
JVM-based execution: Runs on the Java Virtual Machine alongside Bazel
JVM garbage collection: Uses the JVM's garbage collector
Integer representation: Uses Java primitives with BigInteger fallback
Thread model: Integrates with Bazel's Skyframe parallel evaluation
Freezing: All mutable values created during module initialization are frozen upon completion
Not standalone: Lives inside Bazel's monorepo, not published as a separate library
Buck2 uses starlark-rust, a Rust implementation with significant extensions:
Key characteristics:
No JVM GC pauses: Rust's ownership model avoids JVM-style garbage collection overhead
Send/Sync semantics: Frozen values are Send/Sync, non-frozen values are not
Rich type extensions: record, enum, type annotations with runtime checking
Heap allocation: Garbage-collected values are allocated on a dedicated Starlark heap
DAP debugging support: Debug Adapter Protocol for interactive debugging
// From starlark-rust - easy interoperability between Rust types and Starlark
// Rust-friendly types where frozen values are Send/Sync
| Aspect | starlark-java (Bazel) | starlark-rust (Buck2) |
| --- | --- | --- |
| Evaluation | JVM-optimized | Bytecode + optimizations |
| Memory | JVM GC managed | Manual (no GC pauses) |
| Integer perf | Java primitives + BigInteger | Optimized |
| Parallelism | Skyframe integration | Rust Send/Sync semantics |
| Debugging | Limited | DAP support |
| Feature | starlark-java (Bazel) | starlark-rust (Buck2) |
| --- | --- | --- |
| Recursion | Disabled by default | Supported |
| Top-level for | Disabled by default | Supported |
| Type annotations | Limited checking | Runtime enforcement |
| record type | Not available | Built-in |
| enum type | Not available | Built-in |
| Union types | Not available | Built-in (A \| B) |
| struct type | Via struct() builtin | Via record |
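One concrete consequence worth spelling out is recursion. Since Starlark is a Python subset, the difference can be illustrated in plain Python: Bazel's starlark-java rejects a self-referential function at load time, while Buck2's starlark-rust evaluates it with the semantics shown below. The `count_leaves` helper is illustrative, not from either codebase.

```python
# Bazel's starlark-java refuses this definition outright ("recursion is not
# allowed"); Buck2's starlark-rust evaluates it exactly as Python does.

def count_leaves(tree):
    """Count leaf values in a nested list via recursion."""
    if type(tree) != type([]):  # Starlark spells type checks via type()
        return 1
    total = 0
    for item in tree:
        total += count_leaves(item)
    return total

print(count_leaves([1, [2, 3], [[4], 5]]))  # 5
```

In Bazel, the same traversal has to be rewritten as an explicit loop over a worklist, or bounded by a fixed nesting depth.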
WORKSPACE # Root workspace definition (legacy)
WORKSPACE.bazel # Alternative name
MODULE.bazel # Bzlmod dependency management (modern)
MODULE.bazel.lock # Lock file for dependencies
BUILD # Target definitions
BUILD.bazel # Alternative name for BUILD
.bazelrc # Build configuration
.bazelversion # Bazel version pinning
rules.bzl # Rule definitions
defs.bzl # Macro definitions
providers.bzl # Provider definitions
.buckconfig # Root configuration
.buckconfig.d/ # Modular config directory
BUCK # Target definitions
TARGETS # Alternative name for BUCK (Buck1 compat)
rules.bzl # Rule definitions
defs.bzl # Macro definitions
analysis.bxl # BXL scripts for introspection
build.bxl # BXL scripts for custom builds
prelude/ # Standard rule library (git submodule)
native.bzl # Native rule exports
Note
Buck2’s prelude is typically included as a git submodule pointing to github.com/facebook/buck2-prelude. It serves as a “standard library” providing rules for C++, Rust, Python, Go, Java, and more.
Buck2’s starlark-rust implementation includes significant type system extensions not available in Bazel’s starlark-java.
# Bazel: Type hints supported but loosely enforced
def compile(src, out):
    """Compile a source file.

    src: Source file path (string expected)
    out: Output file path (string expected)
    """
    # No runtime type checking
    pass

def typed_compile(src: str, out: str) -> None:
    """Type annotations allowed but provide limited checking."""
    # Annotations are primarily documentation
    pass
# Buck2: Type annotations are enforced at runtime
def compile(src: str, out: Artifact) -> DefaultInfo:
    """Compile a source file.

    Type errors are treated as actual errors at runtime.
    """
    pass

# Generic types are fully supported
def process(items: list[str]) -> dict[str, int]:
    """Process items with full generic type support."""
    return {item: len(item) for item in items}

# typing.Callable annotations are supported and checked
def apply_all(data: list[str], transform: typing.Callable[[str], str]) -> list[str]:
    return [transform(d) for d in data]
Records provide structured data types with compile-time field definitions. This is extensively used in Buck2’s prelude.
Real example from Buck2’s prelude (/refs/buck2/prelude/artifact_tset.bzl):
# Buck2: Define structured types with records
# Describes artifacts required for debugging Swift code
ArtifactInfo = record(
    label = field(Label),
    artifacts = field(list[Artifact]),
    tags = field(list[ArtifactInfoTag]),
)

# Records can have optional fields with defaults
ArtifactTSet = record(
    _tset = field([_ArtifactTSet, None], None),
)

# Usage - creating instances
info = ArtifactInfo(
    label = ctx.label,
    artifacts = debug_artifacts,
    tags = [ArtifactInfoTag("swift_debug_info")],
)
print(info.label)      # Label
print(info.artifacts)  # list[Artifact]
Real example from Buck2’s cxx rules (/refs/buck2/prelude/cxx/cxx_types.bzl):
# Complex record with many fields and defaults
CxxRuleSubTargetParams = record(
    argsfiles = field(bool, True),
    compilation_database = field(bool, True),
    clang_remarks = field(bool, True),
    clang_traces = field(bool, True),
    headers = field(bool, True),
    link_group_map = field(bool, True),
    link_style_outputs = field(bool, True),
    xcode_data = field(bool, True),
    objects = field(bool, True),
    bitcode_bundle = field(bool, True),
    header_unit = field(bool, True),
)
# Record with callable fields
CxxRuleConstructorParams = record(
    headers_layout = CxxHeadersLayout,
    is_test = field(bool, False),
    extra_preprocessors = field(list[CPreprocessor], []),
    # Callable field with complex signature
    output_style_sub_targets_and_providers_factory = field(
        typing.Callable,
        lambda _link_style, _context, _output: ({}, []),
    ),
    error_handler = field([typing.Callable, None], None),
)
Real examples from Buck2’s prelude (/refs/buck2/prelude/cxx/cxx_toolchain_types.bzl):
# Buck2: Define enumerations for type-safe constants
LinkerType = enum("gnu", "darwin", "windows", "wasm")

ShlibInterfacesMode = enum(
    "stub_from_object_files",
    "stub_from_linker_invocation",
    # ... other modes elided
)

# How compilers report header dependencies
DepTrackingMode = enum(
    "makefile",       # gcc -MD -MF depfile
    "show_includes",  # cl.exe /showIncludes
    "show_headers",   # clang/gcc -H
    "none",           # No dep tracking (e.g., ml64)
)

# Position-independent-code behavior per platform
PicBehavior = enum(
    "always_enabled",  # x86_64, arm64
    "supported",       # -fPIC changes output
    "not_supported",   # Windows
)

def is_bitcode_format(format: CxxObjectFormat) -> bool:
    return format in [
        CxxObjectFormat("bitcode"),
        CxxObjectFormat("embedded-bitcode"),
    ]

print(LinkerType.values())        # ["gnu", "darwin", "windows", "wasm"]
print(LinkerType("darwin").index) # 1
print(len(LinkerType))            # 4
# Buck2: Union types with | operator
def process(value: int | str | None) -> str:
    return str(value)

# In provider field definitions
CxxToolchainInfo = provider(fields = {
    "lipo": provider_field([RunInfo, None], default = None),
    "minimum_os_version": provider_field([str, None], default = None),
    "objc_compiler_info": provider_field(
        [ObjcCompilerInfo, None],
        default = None,
    ),
})

# In record field definitions
CGoBuildContext = record(
    _cxx_toolchain = field(Dependency | None, None),
)
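Python 3.10+ shares the X | Y annotation syntax, so the runtime enforcement Buck2 applies to union-typed parameters can be sketched outside Starlark. `check_union` and this version of `process` are illustrative stand-ins, not Buck2 APIs: a value must match one arm of the union or the call fails immediately.

```python
# A sketch of runtime union enforcement: the value must satisfy one arm of
# the union, otherwise evaluation fails at the call site (as in Buck2).

def check_union(value, allowed_types):
    """Raise if value matches no arm of the union, starlark-rust style."""
    if not isinstance(value, tuple(allowed_types)):
        raise TypeError("value %r does not match %s" % (value, allowed_types))

def process(value):
    # Stand-in for Buck2's `def process(value: int | str | None) -> str`
    check_union(value, (int, str, type(None)))
    return str(value)

print(process(3))     # prints 3
print(process(None))  # prints None
# process([]) raises TypeError at call time, as Buck2 would fail the action.
```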
| Feature | Bazel (starlark-java) | Buck2 (starlark-rust) |
| --- | --- | --- |
| Basic annotations | Yes | Yes |
| Generic types (list[T]) | Limited | Full |
| Union types (A \| B) | No | Yes |
| record type | No | Yes |
| enum type | No | Yes |
| field() with defaults | No | Yes |
| Static type checking | Limited | buck2 starlark typecheck |
| Runtime type enforcement | Partial | Full |
| typing.Callable | No | Yes |
| provider_field typed | No | Yes |
Buck2’s prelude is a comprehensive standard library of rules, all written in Starlark. It’s available at github.com/facebook/buck2-prelude.
Note
The prelude is a separate git repository typically included as a git submodule in Buck2 projects. This allows atomic updates to all rules while keeping the prelude versionable independently from the Buck2 binary. The prelude contains 50+ directories covering languages (C++, Rust, Python, Go, Java, Kotlin, Haskell, OCaml, Erlang, and more) and platforms (Apple, Android, Windows, Unix).
Structure of the prelude:
prelude.bzl # Entry point, exports native
native.bzl # Native rule definitions (21KB)
rules.bzl # Rule registration
rules_impl.bzl # Rule implementations (26KB)
genrule.bzl # Generic rule implementation (20KB)
cxx.bzl # C++ rules (41KB)
cxx_library.bzl # Library implementation (102KB)
cxx_toolchain.bzl # Toolchain definition (23KB)
cxx_types.bzl # Type definitions (14KB)
rust_library.bzl # Rust library rules
resolve_deps.bxl # IDE integration
check.bxl # Type checking
python_library.bzl # Python rules
package_builder.bzl # Go package rules
cgo_builder.bzl # CGo integration
apple_library.bzl # Apple platform rules
swift/ # Swift compilation
rust.bzl # Rust toolchain
python.bzl # Python toolchain
Real prelude entry point (/refs/buck2/prelude/prelude.bzl):
# Copyright (c) Meta Platforms, Inc. and affiliates.
load("@prelude//:native.bzl", _native = "native")
load(
    "@prelude//utils:buckconfig.bzl",
    _read_config = "read_config_with_logging",
    _read_root_config = "read_root_config_with_logging",
    log_buckconfigs = "LOG_BUCKCONFIGS",
)

__overridden_builtins__ = {
    "read_config": _read_config,
    "read_root_config": _read_root_config,
} if log_buckconfigs else {}

load_symbols(__overridden_builtins__)

# Public symbols become globals everywhere except bzl files in prelude
Bazel uses a distributed ecosystem of rule repositories:
# MODULE.bazel - Modern dependency management
bazel_dep(name = "rules_cc", version = "0.0.9")
bazel_dep(name = "rules_java", version = "7.3.2")
bazel_dep(name = "rules_python", version = "0.31.0")
bazel_dep(name = "rules_go", version = "0.46.0")
bazel_dep(name = "rules_rust", version = "0.40.0")

# WORKSPACE - legacy http_archive fetching
http_archive(
    name = "rules_cc",
    urls = ["https://github.com/bazelbuild/rules_cc/..."],
)
Key Bazel rule repositories:
| Repository | Purpose | Maintained By |
| --- | --- | --- |
| rules_cc | C/C++ compilation | Bazel team |
| rules_java | Java compilation | Bazel team |
| rules_python | Python packaging | Community |
| rules_go | Go compilation | Community |
| rules_rust | Rust compilation | Community |
| rules_proto | Protocol buffers | Bazel team |
| rules_pkg | Package creation | Bazel team |
| Aspect | Buck2 Prelude | Bazel rules_* |
| --- | --- | --- |
| Location | Single git submodule | Multiple external repos |
| Updates | Atomic prelude updates | Per-repo version management |
| Consistency | Guaranteed compatible | May have version conflicts |
| Customization | Fork and modify | Fork specific rules repo |
| Toolchains | Explicit target deps | Toolchain resolution registry |
Providers are the primary mechanism for passing information between rules in both systems.
Real example from Bazel (/refs/bazel/src/main/starlark/builtins_bzl/common/xcode/providers.bzl):
# Bazel: Provider with init function for complex initialization
def _xcode_version_info_init(
        xcode_version,
        availability,
        visionos_minimum_os_version,
        watchos_minimum_os_version,
        macos_minimum_os_version,
        include_xcode_execution_info):
    execution_requirements = {
        "supports-xcode-requirements-set": "",
    }
    if availability == "LOCAL":
        execution_requirements["no-remote"] = ""
    elif availability == "REMOTE":
        execution_requirements["no-local"] = ""

    # Methods as closures over init parameters
    def _minimum_os_for_platform_type(platform_type):
        if platform_type in (platform_type_struct.ios, platform_type_struct.catalyst):
            return dotted_ios_minimum_os
        elif platform_type == platform_type_struct.tvos:
            return dotted_tvos_minimum_os
        fail("Unhandled platform type: {}".format(platform_type))

    return {
        "xcode_version": lambda: _xcode_version(xcode_version),
        "minimum_os_for_platform_type": _minimum_os_for_platform_type,
        "sdk_version_for_platform": _sdk_version_for_platform,
        "availability": lambda: availability.lower(),
        "execution_info": lambda: execution_requirements,
    }

XcodeVersionInfo, _new_xcode_version_info = provider(
    doc = """The set of Apple versions computed from command line options.""",
    fields = {
        "xcode_version": "Zero-argument function returning dotted_version",
        "minimum_os_for_platform_type": "Function taking platform_type element",
        "sdk_version_for_platform": "Function taking platform element",
        "availability": "Zero-argument function returning availability string",
        "execution_info": "Zero-argument function returning execution requirements",
    },
    init = _xcode_version_info_init,
)
Standard Bazel provider pattern:
# Bazel: Simple provider definition
MyInfo = provider(
    doc = "Contains my rule information",
    fields = {
        "output": "The main output file",
        "transitive_deps": "Depset of transitive dependencies",
        "metadata": "Dict of string metadata",
    },
)

def _my_rule_impl(ctx):
    output = ctx.actions.declare_file(ctx.label.name + ".out")
    ctx.actions.run(
        outputs = [output],
        inputs = ctx.files.srcs,
        executable = ctx.executable._tool,
        arguments = [output.path] + [f.path for f in ctx.files.srcs],
    )
    # Collect transitive deps
    transitive = [dep[MyInfo].transitive_deps for dep in ctx.attr.deps if MyInfo in dep]
    return [
        DefaultInfo(files = depset([output])),
        MyInfo(
            output = output,
            transitive_deps = depset([output], transitive = transitive),
            metadata = {"rule_type": "my_rule"},
        ),
    ]

my_rule = rule(
    implementation = _my_rule_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
        "deps": attr.label_list(providers = [MyInfo]),
        "_tool": attr.label(default = "//tools:processor", executable = True, cfg = "exec"),
    },
)
Real example from Buck2 (/refs/buck2/prelude/cxx/cxx_toolchain_types.bzl):
# Buck2: Provider with typed fields using provider_field
LinkerInfo = provider(fields = {
    "archiver": provider_field(typing.Any, default = None),
    "archiver_flags": provider_field(typing.Any, default = None),
    "archiver_reads_inputs": provider_field(bool, default = True),
    "binary_extension": provider_field(typing.Any, default = None),
    "generate_linker_maps": provider_field(typing.Any, default = None),
    "link_binaries_locally": provider_field(typing.Any, default = None),
    "link_libraries_locally": provider_field(typing.Any, default = None),
    "link_style": provider_field(typing.Any, default = None),
    "link_weight": provider_field(int, default = 1),
    "linker": provider_field(typing.Any, default = None),
    "linker_flags": provider_field(typing.Any, default = None),
    "lto_mode": provider_field(typing.Any, default = None),
    "object_file_extension": provider_field(typing.Any, default = None),
    "shlib_interfaces": provider_field(ShlibInterfacesMode),
    "shared_library_name_format": provider_field(typing.Any, default = None),
    "type": LinkerType,  # Enum type!
    "is_pdb_generated": provider_field(typing.Any, default = None),
    "sanitizer_runtime_enabled": provider_field(bool, default = False),
    "sanitizer_runtime_files": provider_field(list[Artifact], default = []),
})

# Multiple related providers for compiler info
CCompilerInfo = provider(fields = [
    "supports_two_phase_compilation",
    # ... more fields elided
])

# Buck2: Library info provider with Label type
CxxLibraryInfo = provider(
    fields = {
        "target": provider_field(Label),
        "labels": provider_field(list[str]),
    },
)
Note
Why typing.Any? The prelude uses typing.Any extensively for provider fields because:
Forward references: Some types aren't defined yet when the provider is declared
Complex types: Types like cmd_args or custom records may not have stable type names
Flexibility: Allows duck-typing while still providing default values
In practice, Buck2's type system is pragmatic: strict types for records and simple fields, typing.Any for complex or forward-referenced types.
Buck2 rule returning providers (/refs/buck2/prelude/genrule.bzl):
def genrule_impl(ctx: AnalysisContext) -> list[Provider]:
    out_artifact = ctx.actions.declare_output(
        ctx.attrs.out,
        has_content_based_path = content_based,
    )

    # Build command with cmd_args
    cmd = cmd_args(ctx.attrs.cmd, ignore_artifacts = _ignore_artifacts(ctx))

    env_vars = {
        "GEN_DIR": "GEN_DIR_DEPRECATED",
        "OUT": out_artifact.as_output(),
        "SRCDIR": cmd_args(srcs_artifact, format = "./{}"),
    }

    # Run action with category for tracking
    ctx.actions.run(
        cmd_args(script_args, hidden = [cmd, srcs_artifact, out_artifact.as_output()]),
        env = env_vars,
        category = "genrule",
        prefer_local = prefer_local,
        weight = value_or(ctx.attrs.weight, 1),
        allow_cache_upload = cacheable,
        error_handler = genrule_error_handler,
    )

    providers = [DefaultInfo(
        default_outputs = default_outputs,
        sub_targets = sub_targets,
        other_outputs = other_outputs,
    )]
    if getattr(ctx.attrs, "executable", False):
        providers.append(RunInfo(args = cmd_args(default_outputs)))
    return providers
| Concept | Bazel | Buck2 |
| --- | --- | --- |
| Context type | ctx | AnalysisContext |
| Declare output | ctx.actions.declare_file() | ctx.actions.declare_output() |
| Attribute access | ctx.attr.srcs | ctx.attrs.srcs |
| File list | ctx.files.srcs | ctx.attrs.srcs (already artifacts) |
| Command builder | ctx.actions.args() | cmd_args() |
| Run action | ctx.actions.run() | ctx.actions.run() with category |
| Exec dependency | cfg = "exec" | attrs.exec_dep() |
| Output marker | N/A | .as_output() |
| Typed provider fields | No | provider_field(type, default) |
# Bazel: Using depsets for efficient transitive data
def _impl(ctx):
    # Direct files from this target
    direct_files = ctx.files.srcs

    # Collect transitive depsets from deps
    transitive_files = [dep[MyInfo].transitive_deps for dep in ctx.attr.deps]

    # Create depset with ordering
    all_files = depset(
        direct = direct_files,
        transitive = transitive_files,
        order = "postorder",  # topological, postorder, preorder
    )

    # AVOID: Converting to list is O(n), use sparingly
    file_list = all_files.to_list()

    # Use depset in command args for lazy expansion
    args = ctx.actions.args()
    args.add_all(all_files)

    return [DefaultInfo(files = all_files)]
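The payoff of depsets shows up in diamond-shaped graphs, where a shared dependency is emitted only once no matter how many paths reach it. This Python sketch (DepSet here is a toy class, not the Bazel builtin) mimics to_list() with postorder ordering:

```python
# A toy model of depset semantics: children are flattened before the node's
# direct entries (postorder), and every element appears exactly once.

class DepSet:
    def __init__(self, direct=(), transitive=()):
        self.direct = list(direct)
        self.transitive = list(transitive)  # child DepSets, merged lazily

    def to_list(self):
        """Postorder flatten with dedup, like depset.to_list()."""
        seen, out = set(), []
        def visit(node):
            for child in node.transitive:
                visit(child)
            for item in node.direct:
                if item not in seen:
                    seen.add(item)
                    out.append(item)
        visit(self)
        return out

libc = DepSet(direct=["libc.a"])
liba = DepSet(direct=["liba.a"], transitive=[libc])
libb = DepSet(direct=["libb.a"], transitive=[libc])  # diamond: libc shared
binary = DepSet(direct=["main.o"], transitive=[liba, libb])
print(binary.to_list())  # ['libc.a', 'liba.a', 'libb.a', 'main.o']
```

The real implementation also memoizes whole visited subgraphs rather than individual items, which is what keeps merging O(1) at construction time.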
Real example from Buck2 (/refs/buck2/prelude/artifact_tset.bzl):
# Buck2: Transitive sets with projections
def _get_artifacts(entries: list[ArtifactInfo]) -> list[Artifact]:
    return flatten([entry.artifacts for entry in entries])

_ArtifactTSet = transitive_set(
    args_projections = {
        # Transform values for command line
        "artifacts": _get_artifacts,
    },
)

def make_artifact_tset(
        actions: AnalysisActions,
        label: Label | None = None,
        artifacts: list[Artifact] = [],
        infos: list[ArtifactInfo] = [],
        children: list[ArtifactTSet] = [],
        tags: list[ArtifactInfoTag] = []) -> ArtifactTSet:
    # Filter None children efficiently
    children_tsets = [c._tset for c in children if c._tset != None]

    # Optimization: return singleton for empty case
    if not artifacts and not infos and not children_tsets:
        return ArtifactTSet()

    # Optimization: single child passthrough
    if not artifacts and not infos and len(children_tsets) == 1:
        return ArtifactTSet(_tset = children_tsets[0])

    # Build list of all non-child values
    values = list(infos)
    if artifacts:
        values.append(ArtifactInfo(label = label, artifacts = artifacts, tags = tags))

    kwargs = {"value": values}
    if children_tsets:
        kwargs["children"] = children_tsets
    return ArtifactTSet(
        _tset = actions.tset(_ArtifactTSet, **kwargs),
    )

def project_artifacts(
        actions: AnalysisActions,
        tsets: ArtifactTSet | list[ArtifactTSet] = []) -> list[TransitiveSetArgsProjection]:
    """Project artifacts for use in command lines."""
    tset = make_artifact_tset(actions = actions, children = tsets)
    if tset._tset == None:
        return []
    # Efficient projection into cmd_args
    return [tset._tset.project_as_args("artifacts")]
Using transitive sets in commands:
# Buck2: Efficient use in command construction
def _compile_impl(ctx: AnalysisContext) -> list[Provider]:
    output = ctx.actions.declare_output(ctx.label.name + ".o")

    # Create transitive set with projections
    include_tset = ctx.actions.tset(
        IncludeTSet,
        value = ctx.attrs.includes,
        children = [dep[CxxInfo].includes for dep in ctx.attrs.deps],
    )

    # Use projection directly in command (very efficient)
    cmd = cmd_args(
        ctx.attrs._compiler[RunInfo],
        # Project as flags without materializing full list
        include_tset.project_as_args("as_flags"),
        "-o", output.as_output(),
    )
    ctx.actions.run(cmd, category = "cxx_compile")
    return [DefaultInfo(default_output = output)]
| Feature | Bazel depset | Buck2 transitive_set |
| --- | --- | --- |
| Deduplication | Yes | Yes |
| Ordering control | Yes (3 modes) | Via projections |
| Projections | No | Yes (args_projections, json_projections) |
| Reductions | No | Yes (aggregations) |
| Graph integration | Memory optimization | Wired into dep graph |
| Lazy expansion | Via args.add_all | Via project_as_args |
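The projection idea translates to a short Python sketch. TSet and project_as_args below are toy stand-ins for the Buck2 machinery, not its API: each node's value is transformed once by the named projection function, and results stream out lazily instead of being flattened into an intermediate list.

```python
# A toy model of transitive-set projections: the projection function maps a
# node's value to command-line fragments, and expansion walks the graph
# lazily, visiting each shared node once.

class TSet:
    def __init__(self, projections, value=None, children=()):
        self.projections = projections   # name -> fn(value) -> list[str]
        self.value = value
        self.children = list(children)

    def project_as_args(self, name):
        """Return a lazy iterator over projected values, preorder."""
        fn = self.projections[name]
        def walk(node, seen):
            if id(node) in seen:
                return
            seen.add(id(node))
            if node.value is not None:
                yield from fn(node.value)
            for child in node.children:
                yield from walk(child, seen)
        return walk(self, set())

as_flags = {"as_flags": lambda d: ["-I" + d]}
base = TSet(as_flags, value="include/base")
lib = TSet(as_flags, value="include/lib", children=[base])
app = TSet(as_flags, value="include/app", children=[lib, base])
print(list(app.project_as_args("as_flags")))
# ['-Iinclude/app', '-Iinclude/lib', '-Iinclude/base']
```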
These are the primary extension mechanisms for each system, serving different but overlapping purposes.
Aspects augment the build graph with additional information and actions, propagating along dependency edges.
Real example from Bazel (/refs/bazel/tools/compliance/gather_packages.bzl):
# Bazel: Aspect for collecting license information across dep graph
TransitivePackageInfo = provider(
    """Transitive list of all SBOM relevant dependencies.""",
    fields = {
        "top_level_target": "Label: The top level target label",
        "license_info": "depset(LicenseInfo)",
        "package_info": "depset(PackageInfo)",
        "packages": "depset(label)",
        "target_under_license": "Label: Associated with licenses",
        "traces": "list(string) - diagnostic traces",
    },
)

# Singleton for efficiency
NULL_INFO = TransitivePackageInfo(
    license_info = depset(),
    package_info = depset(),
    packages = depset(),
)

def should_traverse(ctx, attr):
    """Check if the dependent attribute should be traversed."""
    k = ctx.rule.kind
    for filters in [aspect_filters, user_aspect_filters]:
        always_ignored = filters.get("*", [])
        if k in filters:
            attr_matches = filters[k]
            if (attr in attr_matches or
                ("_*" in attr_matches and attr.startswith("_")) or
                attr in always_ignored):
                return False
    return True

def _gather_package_impl(target, ctx):
    # Skip exec configuration targets
    if "-exec" in ctx.bin_dir.path:
        return [NULL_INFO]

    licenses = []
    packages = []
    traces = []

    # Gather direct license attachments
    if hasattr(ctx.rule.attr, "applicable_licenses"):
        for dep in ctx.rule.attr.applicable_licenses:
            licenses.append(dep[LicenseInfo])

    # Gather transitive info from dependencies
    trans_license_info = []
    trans_packages = []
    for name in dir(ctx.rule.attr):
        if not should_traverse(ctx, name):
            continue
        a = getattr(ctx.rule.attr, name)
        for dep in (a if type(a) == "list" else [a]):
            if type(dep) != "Target":
                continue
            if TransitivePackageInfo in dep:
                info = dep[TransitivePackageInfo]
                trans_license_info.append(info.license_info)
                trans_packages.append(info.packages)

    return [TransitivePackageInfo(
        target_under_license = target.label,
        license_info = depset(direct = licenses, transitive = trans_license_info),
        packages = depset(direct = packages, transitive = trans_packages),
        traces = traces[:10],  # Limit trace output
    )]

gather_package_info = aspect(
    doc = """Collects License providers into TransitivePackageInfo.""",
    implementation = _gather_package_impl,
    attr_aspects = ["*"],  # Propagate along all deps
    attrs = {
        "_trace": attr.label(default = "@rules_license//rules:trace_target"),
    },
    provides = [TransitivePackageInfo],
    apply_to_generating_rules = True,
)

# Using the aspect in a rule
packages_used = rule(
    doc = """Gather transitive package info and write as JSON.""",
    implementation = _packages_used_impl,
    attrs = {
        "target": attr.label(aspects = [gather_package_info]),
        "out": attr.output(mandatory = True),
    },
)
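Stripped of Bazel's API, this aspect is a memoized fold over the dependency DAG. The Python sketch below (all names hypothetical) shows the shape: each target contributes its direct licenses plus the already-computed results of its dependencies, and memoization mirrors the way an aspect evaluates once per node.

```python
# A toy version of aspect-style propagation: fold direct facts and merged
# child results up the dependency graph, computing each node once.

def gather_licenses(target, graph, licenses, cache=None):
    """graph: target -> list of deps; licenses: target -> direct licenses."""
    if cache is None:
        cache = {}
    if target in cache:  # aspects evaluate each node once, like this memo
        return cache[target]
    collected = set(licenses.get(target, []))
    for dep in graph.get(target, []):
        collected |= gather_licenses(dep, graph, licenses, cache)
    cache[target] = collected
    return collected

graph = {"//app": ["//lib", "//zlib"], "//lib": ["//zlib"], "//zlib": []}
licenses = {"//app": ["MIT"], "//zlib": ["Zlib"]}
print(sorted(gather_licenses("//app", graph, licenses)))  # ['MIT', 'Zlib']
```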
BXL scripts are standalone Starlark programs that can query, analyze, and build targets interactively.
Real example: Analysis BXL (/refs/buck2/tests/core/bxl/test_analysis_data/analysis.bxl):
# Buck2: BXL for analyzing target providers
load(":defs.bzl", "FooInfo")

def _providers_test_impl(ctx):
    node = ctx.configured_targets("root//:provides_foo")
    # Analyze and access providers
    providers = ctx.analysis(node).providers()
    ctx.output.print(providers[FooInfo])

    # Can also analyze by label
    providers = ctx.analysis(node.label).providers()
    ctx.output.print(providers[FooInfo])

providers_test = bxl_main(
    impl = _providers_test_impl,
    cli_args = {},
)

def _dependency_test_impl(ctx):
    node = ctx.configured_targets("root//:stub")
    # Convert analysis result to dependency
    dep = ctx.analysis(node).as_dependency()
    ctx.output.print(type(dep))
    ctx.output.print(dep.label)

dependency_test = bxl_main(
    impl = _dependency_test_impl,
    cli_args = {},
)
Real example: Build BXL (/refs/buck2/tests/core/bxl/test_build_data/build.bxl):
# Buck2: BXL for building targets and inspecting results
def _impl_build(ctx):
    outputs = {}
    # Build target and collect artifacts
    for target, value in ctx.build(ctx.cli_args.target).items():
        outputs[target.raw_target()] = ctx.output.ensure_multiple(value.artifacts())
    ctx.output.print_json(outputs)

build_test = bxl_main(
    impl = _impl_build,
    cli_args = {
        "target": cli_args.target_label(),
    },
)

def _impl_build_stats(ctx):
    stats = {}
    for target, value in ctx.build(ctx.cli_args.targets).items():
        artifacts = value.artifacts()
        failures = value.failures()
        stats[target.raw_target()] = {
            "artifacts": len(artifacts),
            "failures": len(failures),
        }
    ctx.output.print_json(stats)

build_stats = bxl_main(
    impl = _impl_build_stats,
    cli_args = {
        "targets": cli_args.target_expr(),
    },
)

def _impl_cquery_build(ctx):
    # Query for targets first
    universe = ctx.target_universe("...").target_set()
    targets = ctx.cquery().kind("trivial_build", universe)
    outputs = []
    for value in ctx.build(targets).values():
        outputs.extend(ctx.output.ensure_multiple(value.artifacts()))
    ctx.output.print(sep = "\n", *outputs)

cquery_build_test = bxl_main(
    impl = _impl_cquery_build,
    cli_args = {},
)
Real example: CLI Args BXL (/refs/buck2/tests/core/bxl/test_cli_data/cli_args.bxl):
# Buck2: BXL with comprehensive CLI argument support
def _impl_cli_test(ctx):
    ctx.output.print("bool_arg: " + repr(ctx.cli_args.bool_arg))
    ctx.output.print("string_arg: " + repr(ctx.cli_args.string_arg))
    ctx.output.print("int_arg: " + repr(ctx.cli_args.int_arg))
    ctx.output.print("float_arg: " + repr(ctx.cli_args.float_arg))
    ctx.output.print("optional: " + repr(ctx.cli_args.optional))
    ctx.output.print("enum_type: " + repr(ctx.cli_args.enum_type))
    ctx.output.print("target: " + repr(ctx.cli_args.target))
    ctx.output.print("list: " + repr(ctx.cli_args.list_type))

cli_test = bxl_main(
    impl = _impl_cli_test,
    cli_args = {
        "bool_arg": cli_args.bool(),
        "bool_arg_with_default": cli_args.bool(True),
        "string_arg": cli_args.string("default"),
        "int_arg": cli_args.int(),
        "float_arg": cli_args.float(),
        "optional": cli_args.option(cli_args.string()),
        "enum_type": cli_args.enum(["a", "b"]),
        "target": cli_args.target_label(),
        "configured_target": cli_args.configured_target_label(),
        "sub_target": cli_args.sub_target(),
        "list_type": cli_args.list(cli_args.int()),
    },
)

# Short flags
cli_test_short = bxl_main(
    impl = _impl_cli_test_short,
    cli_args = {
        "bool_arg": cli_args.bool(short = "b"),
        "string_arg": cli_args.string(short = "s"),
        "int_arg": cli_args.int(short = "i"),
        "target": cli_args.target_label(short = "t"),
    },
)

# JSON arguments
def _impl_cli_json_arg(ctx):
    my_json = ctx.cli_args.my_json
    _assert_eq(type(my_json["int"]), "int")
    _assert_eq(type(my_json["string"]), "string")
    _assert_eq(my_json["list"], [1, 2, 3])

cli_json_arg = bxl_main(
    impl = _impl_cli_json_arg,
    cli_args = {
        "my-json": cli_args.json(short = "j"),
    },
)
Real example: rust-analyzer Integration (/refs/buck2/prelude/rust/rust-analyzer/resolve_deps.bxl):
# Buck2: BXL for IDE integration (rust-analyzer)
load("@prelude//rust:link_info.bzl", "RustLinkInfo")
load("@prelude//rust/rust-analyzer:provider.bzl", "RustAnalyzerInfo")

TargetInfo = dict[str, typing.Any]

ExpandedAndResolved = record(
    expanded_targets = list[TargetLabel],
    queried_proc_macros = dict[TargetLabel, MacroOutput],
    resolved_deps = dict[TargetLabel, TargetInfo],
)

def materialize(ctx: bxl.Context, target: bxl.ConfiguredTargetNode) -> Artifact:
    analysis = ctx.analysis(target)
    sources = analysis.providers()[DefaultInfo].sub_targets["sources"][DefaultInfo].default_outputs[0]
    return sources

def _process_target_config(
        ctx: bxl.Context,
        target: bxl.ConfiguredTargetNode,
        analysis: bxl.AnalysisResult,
        in_workspace: bool) -> TargetInfo:
    target = target.unwrap_forward()
    providers = analysis.providers()
    ra_info = providers[RustAnalyzerInfo]
    resolved_attrs = target.resolved_attrs_eager(ctx)

    # Convert sources to absolute paths
    srcs = list(resolved_attrs.srcs)

    # Remove configured platform from deps
    deps = [dep.label.raw_target() for dep in ra_info.rust_deps]

    # Build target info for rust-analyzer
    attrs = target.attrs_eager()
    return {
        "crate": ra_info.crate.simple,
        "crate_root": ra_info.crate_root,
        "deps": deps,
        "edition": ra_info.edition,
        "env": {k: cmd_args(v, delimiter = "") for k, v in ra_info.env.items()},
        "features": ra_info.features,
        "in_workspace": in_workspace,
        "kind": target.rule_type,
        "label": target.label.raw_target(),
        "name": resolved_attrs.name,
        "proc_macro": _get_nullable_attr(attrs, "proc_macro"),
        "project_relative_buildfile": ctx.fs.project_rel_path(target.buildfile_path),
        "rustc_flags": ra_info.rustc_flags,
        "source_folder": materialize(ctx, target),
        "srcs": srcs,
    }
| Feature | Bazel Aspects | Buck2 BXL |
| --- | --- | --- |
| Primary purpose | Augment build graph | Query and script builds |
| Execution | During analysis phase | Standalone execution |
| Graph access | Shadow graph parallel to targets | Full graph introspection |
| Can build targets | Part of normal build | Yes, explicitly via ctx.build() |
| CLI arguments | Via rule attributes | Native CLI parsing |
| IDE integration | Yes (aspects generate data) | Yes (primary use case) |
| Artifact access | Through providers | Direct materialization |
| Query capabilities | Limited (via rule attrs) | Full uquery/cquery/aquery |
| Output | Providers to consumers | Print, JSON, artifacts |
| Reusability | Attach to rules | Standalone scripts |
Skyframe is Bazel’s incremental evaluation framework.
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Loading │ -> │ Analysis │ -> │ Execution │
└─────────────┘ └─────────────┘ └─────────────┘
Parse BUILD Create actions Run actions
Evaluate macros Resolve configs Write outputs
Build target Check providers Cache results
Key Skyframe characteristics:
SkyValues (nodes): Immutable objects containing build data
SkyFunctions: Build nodes based on keys and dependent nodes
Incremental invalidation: Precisely invalidates changed nodes
Parallel evaluation: Functions without dependencies run in parallel
Phase boundaries: Loading must complete before analysis, etc.
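The invalidation idea behind Skyframe can be sketched in a few lines of Python. Graph below is a toy, not Skyframe's API: values are memoized per key, reverse edges are recorded as they are read, and editing an input invalidates exactly the keys that transitively depended on it.

```python
# A minimal sketch of incremental evaluation: memoized values plus reverse
# dependency edges, so an input edit dirties only its transitive dependents.

class Graph:
    def __init__(self, functions, inputs):
        self.functions = functions  # key -> (dep_keys, fn(dep_values))
        self.inputs = dict(inputs)
        self.cache = {}
        self.rdeps = {}             # key -> set of keys that read it

    def value(self, key):
        if key in self.inputs:
            return self.inputs[key]
        if key not in self.cache:
            deps, fn = self.functions[key]
            for d in deps:
                self.rdeps.setdefault(d, set()).add(key)
            self.cache[key] = fn([self.value(d) for d in deps])
        return self.cache[key]

    def set_input(self, key, val):
        self.inputs[key] = val
        stack = [key]
        while stack:  # invalidate dependents, and nothing else
            for r in self.rdeps.pop(stack.pop(), ()):
                if r in self.cache:
                    del self.cache[r]
                    stack.append(r)

g = Graph(
    functions={"lib": (["a.c"], lambda v: "compiled:" + v[0]),
               "bin": (["lib"], lambda v: "linked:" + v[0])},
    inputs={"a.c": "v1"},
)
print(g.value("bin"))     # linked:compiled:v1
g.set_input("a.c", "v2")  # dirties lib, then bin
print(g.value("bin"))     # linked:compiled:v2
```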
Starlark-java runtime features (unique to Bazel):
Mutability protocol: Thread isolation via a try-with-resources pattern; values created within a Mutability scope are automatically frozen when the scope exits, preventing data races without explicit synchronization
Step counting: Execution can be limited via setMaxExecutionSteps() to prevent infinite loops; it counts logical operations, not wall-clock time
CPU profiling hooks: Native integration for performance analysis at function call boundaries
GuardedValues: Bindings can be conditionally available based on semantic flags and client data
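The step-counting idea is easy to reproduce in Python with sys.settrace. The helper below is an analogy, not Bazel's implementation: it bounds logical work (line events) rather than wall-clock time, which is what makes the limit deterministic across machines.

```python
# A sketch of step-limited execution: count interpreter line events and
# abort once a budget is exceeded, regardless of how fast the code runs.
import sys

class StepLimitExceeded(Exception):
    pass

def run_with_step_limit(fn, max_steps):
    """Run fn(), aborting after max_steps traced line events."""
    steps = [0]
    def tracer(frame, event, arg):
        if event == "line":
            steps[0] += 1
            if steps[0] > max_steps:
                raise StepLimitExceeded("exceeded %d steps" % max_steps)
        return tracer
    sys.settrace(tracer)
    try:
        return fn()
    finally:
        sys.settrace(None)

def bounded():
    total = 0
    for i in range(5):
        total += i
    return total

print(run_with_step_limit(bounded, 1000))  # 10

def unbounded():
    while True:
        pass

try:
    run_with_step_limit(unbounded, 1000)  # a runaway loop hits the budget
except StepLimitExceeded:
    print("aborted runaway loop")
```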
DICE is Buck2’s single-graph incremental computation system.
┌─────────────────────────────────────────────────────────┐
│ ┌──────┐ ┌──────┐ ┌──────┐ ┌──────┐ ┌──────┐ │
│ │Parse │ → │Config│ → │Analyze│ → │Action│ → │Output│ │
│ └──────┘ └──────┘ └──────┘ └──────┘ └──────┘ │
│ All nodes in single dependency graph │
│ No phase boundaries - continuous parallelism │
└─────────────────────────────────────────────────────────┘
Key DICE characteristics:
No phases: Single incremental dependency graph
Continuous parallelism: Targets can transition through states in parallel
Precise invalidation: Changes invalidate only affected nodes
RE-first design: Remote execution integrated from the start
| Aspect | Bazel Skyframe | Buck2 DICE |
| --- | --- | --- |
| Graph structure | Per-phase graphs | Single unified graph |
| Parallelism | Within phases | Across all operations |
| Invalidation | Per-phase + cross-phase | Single invalidation pass |
| Memory model | JVM garbage collection | Rust ownership (no GC) |
| Configuration | Transitions between phases | Modifiers (experimental) |
| Dynamic deps | Limited (aspects) | dynamic_output actions |
Bazel restricts dynamic dependencies to maintain queryability:
# Bazel: Generate BUILD files externally, then run Bazel
# Or use aspects to gather info first
genrule(
    name = "generated_build",
    srcs = [":gather_info"],  # Aspect output
    outs = ["generated_BUILD"],
    cmd = "python generate_build.py $< > $@",
)
Real example from Buck2 (/refs/buck2/prelude/http_archive/unarchive.bzl):
# Buck2: Dynamic output for processing archive contents
def _unarchive(
        ctx: AnalysisContext,
        archive: Artifact,
        output_name: str,
        ext: str,
        excludes: list[str],
        exec_deps: HttpArchiveExecDeps,
        sub_targets: list[str] | dict[str, list[str]]):
    # First action: list archive contents
    exclusions = ctx.actions.declare_output(output_name + "_exclusions")
    contents = ctx.actions.declare_output(output_name + "_contents")

    tar_script, _ = ctx.actions.write(
        "{}_listing.{}".format(output_name, ext),
        [cmd_args(archive, format = "tar --list -f {} > " + first_param)],
        allow_args = True,
    )
    ctx.actions.run(
        cmd_args(interpreter + [tar_script, contents.as_output()], hidden = [archive]),
        category = "process_exclusions",
    )

    # Dynamic output: create exclusion list after reading contents
    def create_exclusion_list(ctx: AnalysisContext, artifacts, outputs):
        # Read the contents file (artifact is now available)
        files = artifacts[contents].read_string().splitlines()
        exclude_regexen = [regex(e) for e in excludes]
        exclusion_list = []
        for f in files:
            for exclusion in exclude_regexen:
                if exclusion.match(f):
                    exclusion_list.append(f)
                    break
        # Write the exclusion list
        ctx.actions.write(outputs[exclusions], "\n".join(exclusion_list))

    ctx.actions.dynamic_output(
        dynamic = [contents],                # Read these artifacts
        inputs = [],                         # Additional inputs
        outputs = [exclusions.as_output()],  # Bind these outputs
        f = create_exclusion_list,           # Function to run
    )

    exclude_flags = [cmd_args(exclusions, format = "--exclude-from={}")]
    exclude_hidden = [exclusions]

    # Continue with unarchive action using exclusion list
    output = ctx.actions.declare_output(output_name, dir = True)
    # ... rest of implementation
Anonymous targets enable analysis-time graph modifications:
# Buck2: Anonymous targets for shared compilation
def _swift_library_impl(ctx: AnalysisContext) -> list[Provider]:
    # Create anonymous target for shared module compilation.
    # Multiple libraries with the same Swift module share this target.
    # (_swift_module_rule is the anonymous rule being instantiated,
    # defined elsewhere)
    module_target = ctx.actions.anon_target(_swift_module_rule, {
        "module_name": ctx.attrs.module_name,
        "sources": ctx.attrs.srcs,
    })
    # Wait for anonymous target analysis
    module_info = module_target.artifact("module")
    return [DefaultInfo(default_output = module_info)]
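The reason anonymous targets deduplicate work is that they are keyed by their rule and attributes. A minimal sketch of that memoization (illustrative Python, not Buck2's implementation):

```python
# Sketch of anonymous-target deduplication: analysis is keyed by
# (rule, attrs), so two consumers requesting the same module get the
# same cached analysis instead of compiling it twice.

class AnonTargetCache:
    def __init__(self):
        self.cache = {}
        self.analyses = 0

    def anon_target(self, rule, attrs):
        key = (rule, tuple(sorted(attrs.items())))
        if key not in self.cache:
            self.analyses += 1  # analysis runs once per unique key
            self.cache[key] = "analysis of %s(%s)" % (rule, attrs)
        return self.cache[key]

cache = AnonTargetCache()
a = cache.anon_target("swift_module", {"module_name": "Foo"})
b = cache.anon_target("swift_module", {"module_name": "Foo"})
assert a == b and cache.analyses == 1  # second request is a cache hit
```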
Both systems implement the Remote Execution API:
// Shared protocol (simplified)
service Execution {
  rpc Execute(ExecuteRequest) returns (Operation);
  rpc WaitExecution(WaitExecutionRequest) returns (Operation);
}

service ContentAddressableStorage {
  rpc FindMissingBlobs(FindMissingBlobsRequest) returns (FindMissingBlobsResponse);
  rpc BatchUpdateBlobs(BatchUpdateBlobsRequest) returns (BatchUpdateBlobsResponse);
  rpc BatchReadBlobs(BatchReadBlobsRequest) returns (BatchReadBlobsResponse);
}

service ActionCache {
  rpc GetActionResult(GetActionResultRequest) returns (ActionResult);
  rpc UpdateActionResult(UpdateActionResultRequest) returns (ActionResult);
}
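The content-addressable storage half of the protocol is simple enough to sketch directly. The class below is an in-memory toy, not any real CAS server: blobs are keyed by a (hash, size) digest, and `find_missing_blobs` lets a client upload only what the server lacks.

```python
# Toy in-memory model of REAPI's ContentAddressableStorage semantics:
# blobs keyed by (sha256, size); FindMissingBlobs enables incremental upload.
import hashlib

class ContentAddressableStorage:
    def __init__(self):
        self.blobs = {}

    @staticmethod
    def digest(data: bytes):
        return (hashlib.sha256(data).hexdigest(), len(data))

    def find_missing_blobs(self, digests):
        # Client asks which digests it must upload.
        return [d for d in digests if d not in self.blobs]

    def batch_update_blobs(self, blobs):
        for data in blobs:
            self.blobs[self.digest(data)] = data

    def batch_read_blobs(self, digests):
        return [self.blobs[d] for d in digests]

cas = ContentAddressableStorage()
src = b"int main() { return 0; }"
cas.batch_update_blobs([src])
# Identical content hashes to the same digest, so only the new blob is missing:
missing = cas.find_missing_blobs([cas.digest(src), cas.digest(b"new blob")])
```

This digest-keyed model is why both tools get deduplication and caching "for free" once inputs are hashed: re-uploading an unchanged source tree costs one `FindMissingBlobs` round trip.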
| Aspect | Bazel | Buck2 |
|---|---|---|
| Design philosophy | RE added to existing system | RE-first from inception |
| Local execution | Primary mode, RE optional | Special case of RE |
| Directory hashing | Computed when needed | Pre-computed for RE |
| Build Event Protocol | Yes (BEP) | No (uses BuckEvent) |
| Remote persistent workers | Supported | Local workers only |
| Client configurability | Extensive tuning options | Less configurable |
| Hermeticity | Sandbox available locally | RE enforces, local unsandboxed |
Because both speak the same protocol, both systems work with any REAPI-compatible remote execution backend.
Real example from Bazel (/refs/bazel/tools/build_rules/test_rules.bzl):
# Bazel: Test rule utilities
# (abridged; file_test_rule and BASH_RUNFILES_DEP are defined elsewhere
# in the file)
def success_target(ctx, msg, exe = None):
    """Return a success for an analysis test."""
    exe = exe or ctx.outputs.executable
    ctx.actions.write(
        output = exe,
        content = "#!/bin/bash\ncat <<'__eof__'\n" + msg + "\n__eof__\necho",
        is_executable = True,
    )
    return [DefaultInfo(files = depset([exe]))]

def failure_target(ctx, msg, exe = None):
    """Return a failure for an analysis test."""
    exe = exe or ctx.outputs.executable
    ctx.actions.write(
        output = exe,
        content = "#!/bin/bash\ncat >&2 <<'__eof__'\n" + msg + "\n__eof__\nexit 1",
        is_executable = True,
    )
    return [DefaultInfo(files = depset([exe]))]

def _rule_test_rule_impl(ctx):
    """Check that a rule generates the desired outputs and providers."""
    rule_ = ctx.attr.rule
    rule_name = str(rule_.label)
    exe = ctx.outputs.executable
    if ctx.attr.generates:
        prefix = rule_.label.package + "/"
        generates = sorted(ctx.attr.generates)
        generated = sorted([strip_prefix(prefix, f.short_path)
                            for f in rule_.files.to_list()])
        if generates != generated:
            fail("rule %s generates %s not %s" %
                 (rule_name, repr(generated), repr(generates)))
    for k in ctx.attr.provides.keys():
        if not hasattr(rule_, k):
            fail("rule %s doesn't provide attribute %s" % (rule_name, k))
        v = repr(getattr(rule_, k))
    # ... create verification script
    return success_target(ctx, "success", exe = exe)

rule_test_rule = rule(
    implementation = _rule_test_rule_impl,
    attrs = {
        "rule": attr.label(mandatory = True),
        "generates": attr.string_list(),
        "provides": attr.string_dict(),
    },
    test = True,
)

def rule_test(name, rule, generates = None, provides = None, **kwargs):
    """Macro to test rule outputs and providers."""
    rule_test_rule(
        name = name + "_impl",
        rule = rule,
        generates = generates,
        provides = provides,
        visibility = ["//visibility:private"],
    )
    sh_test(name = name, srcs = [name + "_impl"],
            deps = [BASH_RUNFILES_DEP], **kwargs)

def file_test(name, file, content = None, regexp = None, matches = None, **kwargs):
    """Test that a file has given content or matches a pattern."""
    file_test_rule(
        name = name + "_impl",
        file = file,
        content = content or "",
        regexp = regexp or "",
        matches = matches if matches != None else -1,
    )
    sh_test(name = name, srcs = [name + "_impl"], **kwargs)
# Buck2: Test rule with providers
cxx_test(
    name = "mylib_test",
    srcs = ["mylib_test.cc"],
    # Buck2 test-specific attributes
    contacts = ["oncall+myteam@example.com"],
    labels = ["unit", "fast"],
    env = {
        "TEST_DATA_DIR": "$(location :test_data)",
    },
)
# Using BXL for test introspection
def _test_discovery_impl(ctx):
    """Discover all tests matching a pattern."""
    universe = ctx.target_universe(ctx.cli_args.pattern).target_set()
    tests = ctx.cquery().kind(".*_test", universe)
    test_info = []
    for test in tests:
        analysis = ctx.analysis(test)
        providers = analysis.providers()
        test_info.append({
            "label": str(test.label),
            "deps": len(providers[DefaultInfo].default_outputs),
        })
    ctx.output.print_json(test_info)

test_discovery = bxl_main(
    impl = _test_discovery_impl,
    cli_args = {
        "pattern": cli_args.string("..."),
    },
)
# Standard query (loading phase)
bazel query "deps(//my:target)" # All dependencies
bazel query "rdeps(//..., //my:target)" # Reverse dependencies
bazel query "kind(cc_library, //...)" # Filter by rule kind
bazel query "attr(visibility, public, //...)" # Filter by attribute
# Configured query (analysis phase)
bazel cquery "deps(//my:target)" --output=jsonproto
# Action query (execution phase)
bazel aquery "//my:target" --output=jsonproto
bazel aquery "mnemonic(CppCompile, //my:target)"
# Build graph visualization
bazel query "deps(//my:target)" --output=graph | dot -Tpng > graph.png
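The closure semantics behind `deps()` and `rdeps()` can be sketched in a few lines over a plain dict graph (labels below are made up; in both tools the result includes the queried target itself):

```python
# Sketch of deps()/rdeps() semantics: transitive closure over a target
# graph represented as {label: [direct dep labels]}.

def deps(graph, target):
    """Transitive closure of target's deps, like `query "deps(//t)"`."""
    seen = set()
    stack = [target]
    while stack:
        t = stack.pop()
        if t in seen:
            continue
        seen.add(t)
        stack.extend(graph.get(t, []))
    return seen

def rdeps(graph, universe, target):
    """Targets in `universe` that transitively depend on `target`."""
    return {t for t in universe if target in deps(graph, t)}

graph = {
    "//app:main": ["//lib:core", "//lib:ui"],
    "//lib:ui": ["//lib:core"],
    "//lib:core": [],
}
assert deps(graph, "//app:main") == {"//app:main", "//lib:core", "//lib:ui"}
assert rdeps(graph, graph.keys(), "//lib:core") == {
    "//app:main", "//lib:ui", "//lib:core",
}
```

Real implementations differ mainly in scale (millions of nodes, lazy loading) and in whether the graph is configured, which is exactly the `query`/`cquery` split.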
buck2 uquery "deps(//my:target)" # Unconfigured query
buck2 cquery "deps(//my:target)" # Configured query
buck2 aquery "//my:target" # Action query
# BXL for advanced queries
buck2 bxl //queries:analysis.bxl:providers_test
buck2 bxl //queries:build.bxl:build_test -- --target //my:target
# Target universe queries in BXL
buck2 bxl //my.bxl:find_tests -- --pattern "//src/..."
BXL query example (/refs/buck2/tests/core/bxl/test_target_universe_data/target_universe.bxl):
def _target_universe_test(ctx):
    pattern = "some_cell//:inner"
    target_universe = ctx.target_universe(pattern)
    # Direct targets used to construct the universe
    direct_target_set = target_universe.target_set()
    # All targets in the universe (including transitive)
    universe_target_set = target_universe.universe_target_set()
    # Query within the universe
    libraries = ctx.cquery().kind("^library$", universe_target_set)
    ctx.output.print("Direct: {}".format(len(direct_target_set)))
    ctx.output.print("Universe: {}".format(len(universe_target_set)))
    ctx.output.print("Libraries: {}".format(len(libraries)))

target_universe_test = bxl_main(
    impl = _target_universe_test,
    cli_args = {},
)
| Capability | Bazel | Buck2 |
|---|---|---|
| Unconfigured query | `bazel query` | `buck2 uquery` |
| Configured query | `bazel cquery` | `buck2 cquery` |
| Action query | `bazel aquery` | `buck2 aquery` |
| Programmatic query | External tools | BXL scripts |
| Build + query | Separate commands | BXL `ctx.build()` |
| Artifact inspection | `bazel run` | BXL `ctx.output.ensure()` |
| Custom output format | `--output=jsonproto` | BXL `ctx.output.print_json()` |
# Bazel: Configuration transition
def _platform_transition_impl(settings, attr):
    return {
        "//command_line_option:cpu": attr.target_cpu,
        "//command_line_option:compilation_mode": "opt",
    }

platform_transition = transition(
    implementation = _platform_transition_impl,
    inputs = [],
    outputs = [
        "//command_line_option:cpu",
        "//command_line_option:compilation_mode",
    ],
)

# Split transition (1:N) for fat binaries
def _fat_binary_transition_impl(settings, attr):
    return {
        "arm64": {"//command_line_option:cpu": "arm64"},
        "x86_64": {"//command_line_option:cpu": "x86_64"},
    }

fat_binary_transition = transition(
    implementation = _fat_binary_transition_impl,
    inputs = [],
    outputs = ["//command_line_option:cpu"],
)

my_rule = rule(
    implementation = _my_rule_impl,
    cfg = platform_transition,
    attrs = {
        "target_cpu": attr.string(),
        "deps": attr.label_list(cfg = "target"),
        "tools": attr.label_list(cfg = "exec"),
        "_allowlist_function_transition": attr.label(
            default = "@bazel_tools//tools/allowlists/function_transition_allowlist",
        ),
    },
)
# Buck2: Platform-aware configuration with select
cxx_library(
    name = "mylib",
    srcs = ["common.cpp"] + select({
        "config//os:linux": ["linux.cpp"],
        "config//os:macos": ["macos.cpp"],
        "config//os:windows": ["windows.cpp"],
    }),
    compiler_flags = select({
        "config//build:debug": ["-g", "-O0"],
        "config//build:release": ["-O3", "-DNDEBUG"],
    }),
    deps = select({
        "config//os:linux": ["//third_party:pthread"],
        "DEFAULT": [],
    }),
)

# Toolchain configuration
load("@prelude//toolchains:cxx.bzl", "system_cxx_toolchain")

system_cxx_toolchain(
    name = "cxx",
    compiler_flags = select({
        "config//build:debug": ["-g"],
        "config//build:release": ["-O3"],
    }),
)
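What `select()` does at configuration time can be sketched as a small resolver. This is a simplification: real resolution in both tools also handles specificity ordering between overlapping keys, while the sketch just takes the first matching key with `"DEFAULT"` as fallback.

```python
# Simplified select() resolution: pick the branch whose config key is
# active, falling back to "DEFAULT". (Real resolution also ranks
# overlapping keys by specificity.)

def resolve_select(select_dict, active_configs):
    for key, value in select_dict.items():
        if key != "DEFAULT" and key in active_configs:
            return value
    if "DEFAULT" in select_dict:
        return select_dict["DEFAULT"]
    raise ValueError("no matching configuration and no DEFAULT branch")

srcs_select = {
    "config//os:linux": ["linux.cpp"],
    "config//os:macos": ["macos.cpp"],
    "DEFAULT": [],
}
# With the linux config active, the linux branch is concatenated in:
srcs = ["common.cpp"] + resolve_select(srcs_select, {"config//os:linux"})
assert srcs == ["common.cpp", "linux.cpp"]
```

The key property is that resolution happens per configured target, so the same `cxx_library` can resolve differently in a host build and a target build within one invocation.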
# Bazel: Error in rule implementation
def _my_rule_impl(ctx):
    if not ctx.attr.srcs:
        fail("srcs cannot be empty for rule {}".format(ctx.label))
    # Check provider requirements
    for dep in ctx.attr.deps:
        if MyInfo not in dep:
            fail("Dependency {} does not provide MyInfo".format(dep.label))
Building against such a rule produces a Starlark traceback:
ERROR: /path/to/BUILD:10:8: in my_rule rule //pkg:target:
Traceback (most recent call last):
File "/path/to/rules.bzl", line 5, column 9, in _my_rule_impl
fail("srcs cannot be empty for rule {}".format(ctx.label))
Error in fail: srcs cannot be empty for rule //pkg:target
# Buck2: Error handling with expect utility
load("@prelude//utils:expect.bzl", "expect")

def _my_rule_impl(ctx: AnalysisContext) -> list[Provider]:
    expect(
        len(ctx.attrs.srcs) > 0,
        "srcs cannot be empty for rule {}".format(ctx.label),
    )
    # Type checking is automatic:
    # if ctx.attrs.src is the wrong type, you get a runtime error
    # with a stack trace
    # ... rest of implementation

# Custom error handlers for actions
def _generate_error_handler(category: str, errorformats: list[str] | None):
    def handler(ctx: ActionErrorCtx) -> list[ActionSubError]:
        return ctx.parse_with_errorformat(
            category = category,
            errorformats = errorformats,
        )
    return handler
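Conceptually, an action error handler just scans raw tool output against known patterns and buckets each match into a categorized sub-error. The sketch below shows that idea in plain Python with a gcc-style pattern; the dict shape and function names are illustrative, not Buck2's API.

```python
# Illustrative sketch of errorformat-style parsing: match compiler
# stderr lines against a pattern and emit structured sub-errors.
import re

# gcc/clang-style diagnostic: "file:line:col: error: message"
GCC_STYLE = re.compile(r"^(?P<file>[^:]+):(?P<line>\d+):\d+: error: (?P<msg>.*)$")

def parse_sub_errors(category, stderr):
    sub_errors = []
    for line in stderr.splitlines():
        m = GCC_STYLE.match(line)
        if m:
            sub_errors.append({
                "category": category,
                "file": m.group("file"),
                "line": int(m.group("line")),
                "message": m.group("msg"),
            })
    return sub_errors

errors = parse_sub_errors(
    "cxx_compile",
    "main.cpp:42:10: error: expected ';' after expression\n",
)
```

Structured sub-errors are what let a build UI group failures by category instead of dumping raw logs.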
Key Migration Steps

1. File renaming: BUILD -> BUCK; keep the .bzl extension
2. Provider updates:
   - Replace depset with transitive sets
   - Add type annotations
   - Use provider_field() for typed fields
3. Rule API changes:
   - ctx.attr -> ctx.attrs
   - ctx.actions.declare_file() -> ctx.actions.declare_output()
   - Add .as_output() markers
   - Use cmd_args() for command building
4. Convert aspects to BXL for IDE/tooling integrations
5. Update toolchain references to explicit targets
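The purely mechanical steps (rename `BUILD` files, apply the textual API substitutions) can be scripted. The helper below is a hypothetical sketch, not a supported migration tool: real migrations also need the semantic rewrites above, and the substitution list here covers only two of the renames as an illustration.

```python
# Hypothetical migration helper: rename BUILD -> BUCK and apply textual
# Bazel -> Buck2 API substitutions in .bzl files. Semantic changes
# (providers, transitive sets, aspects) still require manual work.
import os
import re

SUBSTITUTIONS = [
    # \b guards keep ctx.attrs from being rewritten again.
    (re.compile(r"\bctx\.attr\b"), "ctx.attrs"),
    (re.compile(r"\bctx\.actions\.declare_file\b"),
     "ctx.actions.declare_output"),
]

def migrate_text(text):
    for pattern, replacement in SUBSTITUTIONS:
        text = pattern.sub(replacement, text)
    return text

def migrate_tree(root):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name == "BUILD":
                os.rename(path, os.path.join(dirpath, "BUCK"))
            elif name.endswith(".bzl"):
                with open(path) as f:
                    text = f.read()
                with open(path, "w") as f:
                    f.write(migrate_text(text))
```

Running such a script first and then fixing the remaining type and provider errors by hand is one plausible way to sequence a migration.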
Key Migration Steps

1. File renaming: BUCK -> BUILD
2. Type system downgrades:
   - Replace record with dicts
   - Replace enum with string constants
   - Remove union types
3. Provider updates:
   - Convert transitive sets to depsets
   - Remove provider_field() type annotations
4. Rule API changes:
   - ctx.attrs -> ctx.attr
   - Remove .as_output() markers
   - Convert cmd_args() to ctx.actions.args()
5. Rewrite BXL as aspects + external tools
| Category | Resources |
|---|---|
| Documentation | bazel.build (comprehensive) |
| Rules repos | rules_cc, rules_java, rules_python, rules_go, rules_rust, etc. |
| Community | Bazel Slack, GitHub Discussions |
| IDE Support | IntelliJ plugin, VSCode extension, multiple LSPs |
| Enterprise | EngFlow, Aspect, BuildBuddy (commercial support) |
| Category | Resources |
|---|---|
| Documentation | buck2.build (growing) |
| Prelude | Single comprehensive standard library |
| Community | GitHub Issues, Discord |
| IDE Support | BXL-based (rust-analyzer, compilation DB) |
| Enterprise | Meta internal, emerging community |