
Bazel vs Buck2: The Definitive Starlark Deep Dive

While both Bazel and Buck2 use Starlark as their extension language, the implementations differ significantly in architecture, capabilities, and philosophy. This guide covers what developers need to understand these differences when choosing between the two build systems, or when migrating from one to the other.

| Aspect | Bazel | Buck2 |
| --- | --- | --- |
| Core Language | Java (with some C++) | Rust |
| Starlark Interpreter | starlark-java (native) | starlark-rust |
| Rule Location | Mix of Java built-ins + Starlark | 100% Starlark (prelude) |
| Type System | Basic annotations | Advanced (record, enum, union) |
| Extension Mechanism | Aspects | BXL (Buck Extension Language) |
| Build Graph | Phased (loading, analysis, execution) | Single incremental graph (DICE) |
| File Names | BUILD, .bzl | BUCK, .bzl, .bxl |
| Remote Execution | Supported, but added after the original design | RE-first design from inception |
| Open Sourced | 2015 | 2023 |
| Primary Users | Google, community | Meta, community |

Part 1: Starlark Implementation Differences

starlark-java (Bazel) vs starlark-rust (Buck2)

The most fundamental difference between Bazel and Buck2 lies in their Starlark interpreters. This affects performance, available language features, and extension capabilities.

Bazel uses starlark-java, a native Java implementation within the Bazel codebase:

// From net.starlark.java - Bazel's native Starlark implementation
// Located in src/main/java/net/starlark/java/
package net.starlark.java.eval;

Key characteristics:

  • JVM-based execution: Runs on the Java Virtual Machine alongside Bazel
  • JVM garbage collection: Uses Java’s garbage collector
  • Integer representation: Uses Java primitives with BigInteger fallback
  • Thread model: Integrates with Bazel’s Skyframe parallel evaluation
  • Freezing: All mutable values created during module initialization are frozen upon completion
  • Not standalone: Lives inside Bazel’s monorepo, not published as a separate library
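The freezing bullet above is worth making concrete, since both interpreters share the same rule: values are mutable while a module's top-level code runs, and everything the module created becomes immutable once initialization completes. A minimal Python sketch of that lifecycle (all names here are illustrative, not Bazel or Buck2 API):

```python
class FrozenError(Exception):
    pass

class StarlarkList:
    """Toy stand-in for a mutable Starlark value."""
    def __init__(self, items):
        self._items = list(items)
        self._frozen = False

    def append(self, item):
        if self._frozen:
            raise FrozenError("cannot mutate a frozen value")
        self._items.append(item)

    def freeze(self):
        self._frozen = True

    def items(self):
        return list(self._items)

def run_module_init(top_level):
    """Evaluate module top-level code, then freeze every value it created."""
    created = []
    top_level(created)
    for value in created:
        value.freeze()  # what both interpreters do at module completion
    return created

def _top_level(created):
    xs = StarlarkList([1, 2])
    xs.append(3)        # fine: the module is still initializing
    created.append(xs)

module_values = run_module_init(_top_level)
try:
    module_values[0].append(4)   # a later load() sees only frozen values
except FrozenError:
    print("frozen")
```

This is also what makes parallel evaluation safe in both systems: once frozen, a value can be shared across threads without synchronization.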

Buck2 uses starlark-rust, a Rust implementation with significant extensions:

Key characteristics:

  • No garbage collection pauses: Rust’s ownership model eliminates GC overhead
  • Send/Sync semantics: Frozen values are Send/Sync, non-frozen values are not
  • Rich type extensions: record, enum, type annotations with runtime checking
  • Heap allocation: Garbage collected values allocated on a dedicated heap
  • DAP debugging support: Debug Adapter Protocol for interactive debugging
// From starlark-rust - easy interoperability between Rust types and Starlark
// Rust-friendly types where frozen values are Send/Sync
| Aspect | starlark-java (Bazel) | starlark-rust (Buck2) |
| --- | --- | --- |
| Evaluation | JVM-optimized | Bytecode + optimizations |
| Memory | JVM GC managed | Dedicated Starlark heap (no JVM GC pauses) |
| Integer perf | Java primitives + BigInteger | Optimized |
| Parallelism | Skyframe integration | Rust Send/Sync semantics |
| Debugging | Limited | DAP support |
| Feature | starlark-java (Bazel) | starlark-rust (Buck2) |
| --- | --- | --- |
| Recursion | Disabled by default | Supported |
| Top-level for | Disabled by default | Supported |
| Type annotations | Limited checking | Runtime enforcement |
| record type | Not available | Built-in |
| enum type | Not available | Built-in |
| Union types | Not available | Built-in (A \| B) |
| struct type | Via struct() builtin | Via record |

project/
    WORKSPACE           # Root workspace definition (legacy)
    WORKSPACE.bazel     # Alternative name
    MODULE.bazel        # Bzlmod dependency management (modern)
    MODULE.bazel.lock   # Lock file for dependencies
    BUILD               # Target definitions
    BUILD.bazel         # Alternative name for BUILD
    .bazelrc            # Build configuration
    .bazelversion       # Bazel version pinning
    rules/
        rules.bzl       # Rule definitions
        defs.bzl        # Macro definitions
        providers.bzl   # Provider definitions

Buck2’s starlark-rust implementation includes significant type system extensions not available in Bazel’s starlark-java.

# Bazel: Type hints supported but loosely enforced
def compile(src, out):
    """Compile a source file.

    Args:
        src: Source file path (string expected)
        out: Output file path (string expected)
    """
    # No runtime type checking
    pass

def typed_compile(src: str, out: str) -> None:
    """Type annotations allowed but provide limited checking."""
    # Annotations are primarily documentation
    pass

Records provide structured data types with compile-time field definitions. This is extensively used in Buck2’s prelude.

Real example from Buck2’s prelude (/refs/buck2/prelude/artifact_tset.bzl):

# Buck2: Define structured types with records
ArtifactInfoTag = enum(
    # Describes artifacts required for debugging Swift code
    "swift_debug_info",
)

ArtifactInfo = record(
    label = field(Label),
    artifacts = field(list[Artifact]),
    tags = field(list[ArtifactInfoTag]),
)

# Records can have optional fields with defaults
ArtifactTSet = record(
    _tset = field([_ArtifactTSet, None], None),
)

# Usage - creating instances
info = ArtifactInfo(
    label = ctx.label,
    artifacts = [output],
    tags = [ArtifactInfoTag("swift_debug_info")],
)

# Type-safe field access
print(info.label)      # Label
print(info.artifacts)  # list[Artifact]

Real example from Buck2’s cxx rules (/refs/buck2/prelude/cxx/cxx_types.bzl):

# Complex record with many fields and defaults
CxxRuleSubTargetParams = record(
    argsfiles = field(bool, True),
    compilation_database = field(bool, True),
    clang_remarks = field(bool, True),
    clang_traces = field(bool, True),
    headers = field(bool, True),
    link_group_map = field(bool, True),
    link_style_outputs = field(bool, True),
    xcode_data = field(bool, True),
    objects = field(bool, True),
    bitcode_bundle = field(bool, True),
    header_unit = field(bool, True),
)

# Record with callable fields
CxxRuleConstructorParams = record(
    rule_type = str,
    headers_layout = CxxHeadersLayout,
    is_test = field(bool, False),
    extra_preprocessors = field(list[CPreprocessor], []),
    # Callable field with complex signature
    output_style_sub_targets_and_providers_factory = field(
        typing.Callable,
        lambda _link_style, _context, _output: ({}, []),
    ),
    # Union type field
    error_handler = field([typing.Callable, None], None),
)

Real examples from Buck2’s prelude (/refs/buck2/prelude/cxx/cxx_toolchain_types.bzl):

# Buck2: Define enumerations for type-safe constants
LinkerType = enum("gnu", "darwin", "windows", "wasm")

ShlibInterfacesMode = enum(
    "disabled",
    "defined_only",
    "stub_from_library",
    "stub_from_object_files",
    "stub_from_linker_invocation",
)

DepTrackingMode = enum(
    "makefile",       # gcc -MD -MF depfile
    "show_includes",  # cl.exe /showIncludes
    "show_headers",   # clang/gcc -H
    "none",           # No dep tracking (e.g., ml64)
)

CxxObjectFormat = enum(
    "native",
    "bitcode",
    "embedded-bitcode",
    "swift",
)

PicBehavior = enum(
    "always_enabled",  # x86_64, arm64
    "supported",       # -fPIC changes output
    "not_supported",   # Windows
)

# Usage in functions
def is_bitcode_format(format: CxxObjectFormat) -> bool:
    return format in [
        CxxObjectFormat("bitcode"),
        CxxObjectFormat("embedded-bitcode"),
    ]

# Enum properties
print(LinkerType.values())         # ["gnu", "darwin", "windows", "wasm"]
print(LinkerType("darwin").index)  # 1
print(len(LinkerType))             # 4

# Buck2: Union types with | operator
def process(value: int | str | None) -> str:
    if value == None:
        return "none"
    if type(value) == "int":
        return str(value)
    return value

# In record field definitions
CxxToolchainInfo = provider(
    fields = {
        "lipo": provider_field([RunInfo, None], default = None),
        "minimum_os_version": provider_field([str, None], default = None),
        "objc_compiler_info": provider_field(
            [ObjcCompilerInfo, None],
            default = None,
        ),
    },
)

# Complex union types
CGoBuildContext = record(
    _cxx_toolchain = field(Dependency | None, None),
)
| Feature | Bazel (starlark-java) | Buck2 (starlark-rust) |
| --- | --- | --- |
| Basic annotations | Yes | Yes |
| Generic types (list[T]) | Limited | Full |
| Union types (A \| B) | No | Yes |
| record type | No | Yes |
| enum type | No | Yes |
| field() with defaults | No | Yes |
| Static type checking | Limited | buck2 starlark typecheck |
| Runtime type enforcement | Partial | Full |
| typing.Callable | No | Yes |
| provider_field typed | No | Yes |

Buck2’s prelude is a comprehensive standard library of rules, all written in Starlark. It’s available at github.com/facebook/buck2-prelude.

Structure of the prelude:

prelude/
    prelude.bzl             # Entry point, exports native
    native.bzl              # Native rule definitions (21KB)
    rules.bzl               # Rule registration
    rules_impl.bzl          # Rule implementations (26KB)
    genrule.bzl             # Generic rule implementation (20KB)
    cxx/
        cxx.bzl             # C++ rules (41KB)
        cxx_library.bzl     # Library implementation (102KB)
        cxx_toolchain.bzl   # Toolchain definition (23KB)
        cxx_types.bzl       # Type definitions (14KB)
    rust/
        rust_library.bzl    # Rust library rules
        rust-analyzer/
            resolve_deps.bxl  # IDE integration
            check.bxl         # Type checking
    python/
        python_library.bzl  # Python rules
    go/
        package_builder.bzl # Go package rules
        cgo_builder.bzl     # CGo integration
    apple/
        apple_library.bzl   # Apple platform rules
        swift/              # Swift compilation
    toolchains/
        cxx.bzl             # C++ toolchain
        rust.bzl            # Rust toolchain
        python.bzl          # Python toolchain

Real prelude entry point (/refs/buck2/prelude/prelude.bzl):

# Copyright (c) Meta Platforms, Inc. and affiliates.
load("@prelude//:native.bzl", _native = "native")
load(
    "@prelude//utils:buckconfig.bzl",
    _read_config = "read_config_with_logging",
    _read_root_config = "read_root_config_with_logging",
    log_buckconfigs = "LOG_BUCKCONFIGS",
)

__overridden_builtins__ = {
    "read_config": _read_config,
    "read_root_config": _read_root_config,
} if log_buckconfigs else {}

load_symbols(__overridden_builtins__)

# Public symbols become globals everywhere except bzl files in prelude
native = _native

Bazel uses a distributed ecosystem of rule repositories:

# MODULE.bazel - Modern dependency management
bazel_dep(name = "rules_cc", version = "0.0.9")
bazel_dep(name = "rules_java", version = "7.3.2")
bazel_dep(name = "rules_python", version = "0.31.0")
bazel_dep(name = "rules_go", version = "0.46.0")
bazel_dep(name = "rules_rust", version = "0.40.0")

# Or WORKSPACE (legacy)
http_archive(
    name = "rules_cc",
    urls = ["https://github.com/bazelbuild/rules_cc/..."],
    sha256 = "...",
)

Key Bazel rule repositories:

| Repository | Purpose | Maintained By |
| --- | --- | --- |
| rules_cc | C/C++ compilation | Bazel team |
| rules_java | Java compilation | Bazel team |
| rules_python | Python packaging | Community |
| rules_go | Go compilation | Community |
| rules_rust | Rust compilation | Community |
| rules_proto | Protocol buffers | Bazel team |
| rules_pkg | Package creation | Bazel team |

| Aspect | Buck2 Prelude | Bazel rules_* |
| --- | --- | --- |
| Location | Single git submodule | Multiple external repos |
| Updates | Atomic prelude updates | Per-repo version management |
| Consistency | Guaranteed compatible | May have version conflicts |
| Customization | Fork and modify | Fork specific rules repo |
| Toolchains | Explicit target deps | Toolchain resolution registry |

Providers are the primary mechanism for passing information between rules in both systems.

Real example from Bazel (/refs/bazel/src/main/starlark/builtins_bzl/common/xcode/providers.bzl):

# Bazel: Provider with init function for complex initialization
def _xcode_version_info_init(
        *,
        ios_sdk_version,
        ios_minimum_os_version,
        visionos_sdk_version,
        visionos_minimum_os_version,
        watchos_sdk_version,
        watchos_minimum_os_version,
        tvos_sdk_version,
        tvos_minimum_os_version,
        macos_sdk_version,
        macos_minimum_os_version,
        xcode_version,
        availability,
        xcode_version_flag,
        include_xcode_execution_info):
    execution_requirements = {
        "requires-darwin": "",
        "supports-xcode-requirements-set": "",
    }
    if availability == "LOCAL":
        execution_requirements["no-remote"] = ""
    elif availability == "REMOTE":
        execution_requirements["no-local"] = ""

    # Methods as closures over init parameters
    def _minimum_os_for_platform_type(platform_type):
        if platform_type in (platform_type_struct.ios, platform_type_struct.catalyst):
            return dotted_ios_minimum_os
        elif platform_type == platform_type_struct.tvos:
            return dotted_tvos_minimum_os
        # ... more cases
        fail("Unhandled platform type: {}".format(platform_type))

    return {
        "xcode_version": lambda: _xcode_version(xcode_version),
        "minimum_os_for_platform_type": _minimum_os_for_platform_type,
        "sdk_version_for_platform": _sdk_version_for_platform,
        "availability": lambda: availability.lower(),
        "execution_info": lambda: execution_requirements,
    }

XcodeVersionInfo, _new_xcode_version_info = provider(
    doc = """The set of Apple versions computed from command line options.""",
    fields = {
        "xcode_version": "Zero-argument function returning dotted_version",
        "minimum_os_for_platform_type": "Function taking platform_type element",
        "sdk_version_for_platform": "Function taking platform element",
        "availability": "Zero-argument function returning availability string",
        "execution_info": "Zero-argument function returning execution requirements",
    },
    init = _xcode_version_info_init,
)

Standard Bazel provider pattern:

# Bazel: Simple provider definition
MyInfo = provider(
    doc = "Contains my rule information",
    fields = {
        "output": "The main output file",
        "transitive_deps": "Depset of transitive dependencies",
        "metadata": "Dict of string metadata",
    },
)

def _my_rule_impl(ctx):
    output = ctx.actions.declare_file(ctx.label.name + ".out")
    ctx.actions.run(
        outputs = [output],
        inputs = ctx.files.srcs,
        executable = ctx.executable._tool,
        arguments = [output.path] + [f.path for f in ctx.files.srcs],
    )

    # Collect transitive deps
    transitive = [dep[MyInfo].transitive_deps for dep in ctx.attr.deps if MyInfo in dep]

    return [
        DefaultInfo(files = depset([output])),
        MyInfo(
            output = output,
            transitive_deps = depset([output], transitive = transitive),
            metadata = {"rule_type": "my_rule"},
        ),
    ]

my_rule = rule(
    implementation = _my_rule_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
        "deps": attr.label_list(providers = [MyInfo]),
        "_tool": attr.label(
            executable = True,
            cfg = "exec",
            default = "//tools:processor",
        ),
    },
)
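The data flow in the rule above — each dependency exposes an immutable info object, and the consuming rule merges its deps' transitive data into a new one — can be reduced to a few lines of Python. All names below are hypothetical stand-ins, not the Bazel API:

```python
from collections import namedtuple

# Immutable info object, playing the role of a provider instance
MyInfo = namedtuple("MyInfo", ["output", "transitive_deps"])

def my_rule_impl(name, deps):
    """Toy analogue of _my_rule_impl: one output, merged transitive deps."""
    output = name + ".out"
    transitive = frozenset()
    for dep in deps:                      # like iterating ctx.attr.deps
        transitive |= dep["MyInfo"].transitive_deps
    info = MyInfo(output = output,
                  transitive_deps = frozenset([output]) | transitive)
    return {"MyInfo": info}               # like returning [MyInfo(...)]

lib = my_rule_impl("lib", [])
app = my_rule_impl("app", [lib])          # app depends on lib
print(sorted(app["MyInfo"].transitive_deps))  # ['app.out', 'lib.out']
```

The essential property in both build systems is the same: a rule never reaches into another rule's internals, it only reads the providers that rule chose to return.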

Real example from Buck2 (/refs/buck2/prelude/cxx/cxx_toolchain_types.bzl):

# Buck2: Provider with typed fields using provider_field
LinkerInfo = provider(
    fields = {
        "archiver": provider_field(typing.Any, default = None),
        "archiver_flags": provider_field(typing.Any, default = None),
        "archiver_reads_inputs": provider_field(bool, default = True),
        "binary_extension": provider_field(typing.Any, default = None),
        "generate_linker_maps": provider_field(typing.Any, default = None),
        "link_binaries_locally": provider_field(typing.Any, default = None),
        "link_libraries_locally": provider_field(typing.Any, default = None),
        "link_style": provider_field(typing.Any, default = None),
        "link_weight": provider_field(int, default = 1),
        "linker": provider_field(typing.Any, default = None),
        "linker_flags": provider_field(typing.Any, default = None),
        "lto_mode": provider_field(typing.Any, default = None),
        "object_file_extension": provider_field(typing.Any, default = None),
        "shlib_interfaces": provider_field(ShlibInterfacesMode),
        "shared_library_name_format": provider_field(typing.Any, default = None),
        "type": LinkerType,  # Enum type!
        "is_pdb_generated": provider_field(typing.Any, default = None),
        "sanitizer_runtime_enabled": provider_field(bool, default = False),
        "sanitizer_runtime_files": provider_field(list[Artifact], default = []),
    },
)

# Multiple related providers for compiler info
CCompilerInfo = provider(fields = [
    "compiler",
    "compiler_type",
    "compiler_flags",
    "argsfile",
    "preprocessor",
    "preprocessor_type",
    "preprocessor_flags",
    "allow_cache_upload",
    "supports_two_phase_compilation",
])

# Buck2: Library info provider with Label type
CxxLibraryInfo = provider(
    fields = dict(
        target = provider_field(Label),
        labels = provider_field(list[str]),
    ),
)

Buck2 rule returning providers (/refs/buck2/prelude/genrule.bzl):

def genrule_impl(ctx: AnalysisContext) -> list[Provider]:
    # Declare outputs
    out_artifact = ctx.actions.declare_output(
        GENRULE_OUT_DIR,
        dir = True,
        has_content_based_path = content_based,
    )

    # Build command with cmd_args
    cmd = cmd_args(ctx.attrs.cmd, ignore_artifacts = _ignore_artifacts(ctx))

    # Environment setup
    env_vars = {
        "GEN_DIR": "GEN_DIR_DEPRECATED",
        "OUT": out_artifact.as_output(),
        "SRCDIR": cmd_args(srcs_artifact, format = "./{}"),
        "SRCS": srcs,
    }

    # Run action with category for tracking
    ctx.actions.run(
        cmd_args(script_args, hidden = [cmd, srcs_artifact, out_artifact.as_output()]),
        env = env_vars,
        local_only = local_only,
        prefer_local = prefer_local,
        weight = value_or(ctx.attrs.weight, 1),
        allow_cache_upload = cacheable,
        category = category,
        identifier = identifier,
        error_handler = genrule_error_handler,
    )

    # Build providers
    providers = [DefaultInfo(
        default_outputs = default_outputs,
        sub_targets = sub_targets,
        other_outputs = other_outputs,
    )]
    if getattr(ctx.attrs, "executable", False):
        providers.append(RunInfo(args = cmd_args(default_outputs)))
    return providers
| Concept | Bazel | Buck2 |
| --- | --- | --- |
| Context type | ctx | AnalysisContext |
| Declare output | ctx.actions.declare_file() | ctx.actions.declare_output() |
| Attribute access | ctx.attr.srcs | ctx.attrs.srcs |
| File list | ctx.files.srcs | ctx.attrs.srcs (already artifacts) |
| Command builder | ctx.actions.args() | cmd_args() |
| Run action | ctx.actions.run() | ctx.actions.run() with category |
| Exec dependency | cfg = "exec" | attrs.exec_dep() |
| Output marker | N/A | .as_output() |
| Typed provider fields | No | provider_field(type, default) |

# Bazel: Using depsets for efficient transitive data
def _impl(ctx):
    # Direct files from this target
    direct_files = ctx.files.srcs

    # Collect transitive depsets from deps
    transitive_files = [
        dep[DefaultInfo].files
        for dep in ctx.attr.deps
    ]

    # Create depset with ordering
    all_files = depset(
        direct = direct_files,
        transitive = transitive_files,
        order = "postorder",  # topological, postorder, preorder
    )

    # AVOID: Converting to list is O(n), use sparingly
    file_list = all_files.to_list()

    # Use depset in command args for lazy expansion
    args = ctx.actions.args()
    args.add_all(all_files)

    return [DefaultInfo(files = all_files)]
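The property that makes depsets cheap — shared transitive structure that is deduplicated and flattened only when to_list() is called — can be modeled in Python. This is a sketch of the semantics only (the real implementation is more careful about ordering modes and node sharing):

```python
class Depset:
    """Toy depset: direct items plus shared transitive children."""
    def __init__(self, direct=(), transitive=()):
        # Construction is O(1)-ish: children are referenced, never copied
        self.direct = list(direct)
        self.transitive = list(transitive)

    def to_list(self):
        """Flatten in roughly 'postorder', deduplicating as we go."""
        seen, out = set(), []
        def visit(ds):
            for child in ds.transitive:  # children first => deps before dependents
                visit(child)
            for item in ds.direct:
                if item not in seen:
                    seen.add(item)
                    out.append(item)
        visit(self)
        return out

base = Depset(direct=["a.h"])
mid = Depset(direct=["b.h", "a.h"], transitive=[base])  # duplicate "a.h"
top = Depset(direct=["top.h"], transitive=[mid])
print(top.to_list())  # ['a.h', 'b.h', 'top.h'] - deduplicated, deps first
```

This is why the comment in the Bazel example warns against calling to_list() casually: building the nested structure is cheap, but flattening it visits every element.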

Real example from Buck2 (/refs/buck2/prelude/artifact_tset.bzl):

# Buck2: Transitive sets with projections
_ArtifactTSet = transitive_set(
    args_projections = {
        # Transform values for command line
        "artifacts": _get_artifacts,
    },
)

def _get_artifacts(entries: list[ArtifactInfo]) -> list[Artifact]:
    return flatten([entry.artifacts for entry in entries])

def make_artifact_tset(
        actions: AnalysisActions,
        label: Label | None = None,
        artifacts: list[Artifact] = [],
        infos: list[ArtifactInfo] = [],
        children: list[ArtifactTSet] = [],
        tags: list[ArtifactInfoTag] = []) -> ArtifactTSet:
    # Filter None children efficiently
    children_tsets = [c._tset for c in children if c._tset != None]

    # Optimization: return singleton for empty case
    if not artifacts and not infos and not children_tsets:
        return EmptyArtifactTSet

    # Optimization: single child passthrough
    if not artifacts and not infos and len(children_tsets) == 1:
        return children[0]

    # Build list of all non-child values
    values = []
    if artifacts:
        values.append(ArtifactInfo(label = label, artifacts = artifacts, tags = tags))
    values.extend(infos)

    kwargs = {}
    if values:
        kwargs["value"] = values
    if children_tsets:
        kwargs["children"] = children_tsets
    return ArtifactTSet(
        _tset = actions.tset(_ArtifactTSet, **kwargs),
    )

def project_artifacts(
        actions: AnalysisActions,
        tsets: ArtifactTSet | list[ArtifactTSet] = []) -> list[TransitiveSetArgsProjection]:
    """Project artifacts for use in command lines."""
    if is_list(tsets):
        tset = make_artifact_tset(actions = actions, children = tsets)
    else:
        tset = tsets
    if tset._tset == None:
        return []
    # Efficient projection into cmd_args
    return [tset._tset.project_as_args("artifacts")]

Using transitive sets in commands:

# Buck2: Efficient use in command construction
def _compile_impl(ctx: AnalysisContext) -> list[Provider]:
    # Create transitive set with projections
    include_tset = ctx.actions.tset(
        IncludeTSet,
        value = ctx.attrs.includes,
        children = [dep[CxxInfo].includes for dep in ctx.attrs.deps],
    )

    # Use projection directly in command (very efficient)
    ctx.actions.run(
        cmd_args(
            ctx.attrs._compiler[RunInfo],
            "-c", ctx.attrs.src,
            # Project as flags without materializing full list
            include_tset.project_as_args("as_flags"),
            "-o", output.as_output(),
        ),
        category = "cxx_compile",
    )
| Feature | Bazel depset | Buck2 transitive_set |
| --- | --- | --- |
| Deduplication | Yes | Yes |
| Ordering control | Yes (3 modes) | Via projections |
| Projections | No | Yes (args_projections, json_projections) |
| Reductions | No | Yes (aggregations) |
| Graph integration | Memory optimization | Wired into dep graph |
| Lazy expansion | Via args.add_all | Via project_as_args |
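The projection row in the table above is the key structural difference: rather than flattening values and then formatting them, each node's values are mapped through a named projection function as the graph is walked. A hypothetical Python sketch of that idea:

```python
class TSet:
    """Toy transitive set: per-node values, children, named projections."""
    def __init__(self, values=(), children=(), projections=None):
        self.values = list(values)
        self.children = list(children)
        self.projections = dict(projections or {})

    def project_as_args(self, name):
        """Walk the graph once, applying the projection node by node."""
        project = self.projections[name]
        seen, args, queue = set(), [], [self]
        while queue:
            node = queue.pop(0)          # visit each shared node exactly once
            if id(node) in seen:
                continue
            seen.add(id(node))
            args.extend(project(node.values))
            queue.extend(node.children)
        return args

# Projection: turn include directories into compiler flags
projections = {"as_flags": lambda values: ["-I" + v for v in values]}
leaf = TSet(values=["third_party/include"], projections=projections)
root = TSet(values=["src/include"], children=[leaf], projections=projections)
print(root.project_as_args("as_flags"))  # ['-Isrc/include', '-Ithird_party/include']
```

In real Buck2 the projection result is handed to cmd_args lazily rather than expanded eagerly like this, but the per-node transformation is the same shape.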

These are the primary extension mechanisms for each system, serving different but overlapping purposes.

Aspects augment the build graph with additional information and actions, propagating along dependency edges.

Real example from Bazel (/refs/bazel/tools/compliance/gather_packages.bzl):

# Bazel: Aspect for collecting license information across dep graph
TransitivePackageInfo = provider(
    """Transitive list of all SBOM relevant dependencies.""",
    fields = {
        "top_level_target": "Label: The top level target label",
        "license_info": "depset(LicenseInfo)",
        "package_info": "depset(PackageInfo)",
        "packages": "depset(label)",
        "target_under_license": "Label: Associated with licenses",
        "traces": "list(string) - diagnostic traces",
    },
)

# Singleton for efficiency
NULL_INFO = TransitivePackageInfo(
    license_info = depset(),
    package_info = depset(),
    packages = depset(),
)

def should_traverse(ctx, attr):
    """Check if the dependent attribute should be traversed."""
    k = ctx.rule.kind
    for filters in [aspect_filters, user_aspect_filters]:
        always_ignored = filters.get("*", [])
        if k in filters:
            attr_matches = filters[k]
            if (attr in attr_matches or
                "*" in attr_matches or
                ("_*" in attr_matches and attr.startswith("_")) or
                attr in always_ignored):
                return False
    return True

def _gather_package_impl(target, ctx):
    # Skip exec configuration targets
    if "-exec" in ctx.bin_dir.path:
        return [NULL_INFO]

    # Gather direct license attachments
    licenses = []
    if hasattr(ctx.rule.attr, "applicable_licenses"):
        for dep in ctx.rule.attr.applicable_licenses:
            if LicenseInfo in dep:
                licenses.append(dep[LicenseInfo])

    # Gather transitive info from dependencies
    trans_license_info = []
    trans_packages = []
    traces = []
    for name in dir(ctx.rule.attr):
        if not should_traverse(ctx, name):
            continue
        a = getattr(ctx.rule.attr, name)
        if type(a) != type([]):
            a = [a]
        for dep in a:
            if type(dep) != "Target":
                continue
            if TransitivePackageInfo in dep:
                info = dep[TransitivePackageInfo]
                if info.license_info:
                    trans_license_info.append(info.license_info)

    return [TransitivePackageInfo(
        target_under_license = target.label,
        license_info = depset(direct = licenses, transitive = trans_license_info),
        packages = depset(direct = packages, transitive = trans_packages),
        traces = traces[:10],  # Limit trace output
    )]

gather_package_info = aspect(
    doc = """Collects License providers into TransitivePackageInfo.""",
    implementation = _gather_package_impl,
    attr_aspects = ["*"],  # Propagate along all deps
    attrs = {
        "_trace": attr.label(default = "@rules_license//rules:trace_target"),
    },
    provides = [TransitivePackageInfo],
    apply_to_generating_rules = True,
)

# Using the aspect in a rule
packages_used = rule(
    doc = """Gather transitive package info and write as JSON.""",
    implementation = _packages_used_impl,
    attrs = {
        "target": attr.label(
            aspects = [gather_package_info],
            allow_files = True,
        ),
        "out": attr.output(mandatory = True),
    },
)
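Stripped of Bazel specifics, the aspect above boils down to a function applied at every node reachable along dependency edges, merging its deps' results into its own before returning. A minimal Python sketch of that propagation (hypothetical names, not the Bazel API):

```python
def gather_licenses(target, dep_results):
    """Aspect body: direct licenses plus everything gathered from deps."""
    collected = set(target.get("licenses", []))
    for result in dep_results:
        collected |= result
    return collected

def apply_aspect(aspect, target, cache=None):
    """Propagate the aspect along dep edges, visiting each target once."""
    cache = {} if cache is None else cache
    name = target["name"]
    if name not in cache:
        dep_results = [apply_aspect(aspect, d, cache)
                       for d in target.get("deps", [])]
        cache[name] = aspect(target, dep_results)
    return cache[name]

# Toy target graph: app -> ssl -> zlib, app -> zlib (a diamond)
zlib = {"name": "zlib", "licenses": ["Zlib"]}
ssl = {"name": "ssl", "licenses": ["OpenSSL"], "deps": [zlib]}
app = {"name": "app", "licenses": ["MIT"], "deps": [ssl, zlib]}
print(sorted(apply_aspect(gather_licenses, app)))  # ['MIT', 'OpenSSL', 'Zlib']
```

The cache mirrors how Bazel evaluates an aspect at most once per configured target, so shared dependencies like zlib here are not reprocessed.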

BXL scripts are standalone Starlark programs that can query, analyze, and build targets interactively.

Real example: Analysis BXL (/refs/buck2/tests/core/bxl/test_analysis_data/analysis.bxl):

# Buck2: BXL for analyzing target providers
load(":defs.bzl", "FooInfo")

def _providers_test_impl(ctx):
    # Get configured target
    node = ctx.configured_targets("root//:provides_foo")

    # Analyze and access providers
    providers = ctx.analysis(node).providers()
    ctx.output.print(providers[FooInfo])

    # Can also analyze by label
    providers = ctx.analysis(node.label).providers()
    ctx.output.print(providers[FooInfo])

providers_test = bxl_main(
    impl = _providers_test_impl,
    cli_args = {},
)

def _dependency_test_impl(ctx):
    node = ctx.configured_targets("root//:stub")

    # Convert analysis result to dependency
    dep = ctx.analysis(node).as_dependency()
    ctx.output.print(type(dep))
    ctx.output.print(dep.label)

dependency_test = bxl_main(
    impl = _dependency_test_impl,
    cli_args = {},
)

Real example: Build BXL (/refs/buck2/tests/core/bxl/test_build_data/build.bxl):

# Buck2: BXL for building targets and inspecting results
def _impl(ctx):
    outputs = {}
    # Build target and collect artifacts
    for target, value in ctx.build(ctx.cli_args.target).items():
        outputs.update({
            target.raw_target(): ctx.output.ensure_multiple(value.artifacts()),
        })
    ctx.output.print_json(outputs)

build_test = bxl_main(
    impl = _impl,
    cli_args = {
        "target": cli_args.target_label(),
    },
)

def _impl_build_stats(ctx):
    stats = {}
    for target, value in ctx.build(ctx.cli_args.targets).items():
        artifacts = value.artifacts()
        failures = value.failures()
        stats[target.raw_target()] = {
            "artifacts": len(artifacts),
            "failures": len(failures),
        }
    ctx.output.print_json(stats)

build_stats = bxl_main(
    impl = _impl_build_stats,
    cli_args = {
        "targets": cli_args.target_expr(),
    },
)

def _cquery_build(ctx):
    # Query for targets first
    universe = ctx.target_universe("...").target_set()
    targets = ctx.cquery().kind("trivial_build", universe)

    # Build queried targets
    outputs = []
    for value in ctx.build(targets).values():
        outputs.extend(ctx.output.ensure_multiple(value.artifacts()))
    ctx.output.print(sep = "\n", *outputs)

cquery_build_test = bxl_main(
    impl = _cquery_build,
    cli_args = {},
)

Real example: CLI Args BXL (/refs/buck2/tests/core/bxl/test_cli_data/cli_args.bxl):

# Buck2: BXL with comprehensive CLI argument support
def _impl(ctx):
    ctx.output.print("bool_arg: " + repr(ctx.cli_args.bool_arg))
    ctx.output.print("string_arg: " + repr(ctx.cli_args.string_arg))
    ctx.output.print("int_arg: " + repr(ctx.cli_args.int_arg))
    ctx.output.print("float_arg: " + repr(ctx.cli_args.float_arg))
    ctx.output.print("optional: " + repr(ctx.cli_args.optional))
    ctx.output.print("enum_type: " + repr(ctx.cli_args.enum_type))
    ctx.output.print("target: " + repr(ctx.cli_args.target))
    ctx.output.print("list: " + repr(ctx.cli_args.list_type))

cli_test = bxl_main(
    impl = _impl,
    cli_args = {
        "bool_arg": cli_args.bool(),
        "bool_arg_with_default": cli_args.bool(True),
        "string_arg": cli_args.string("default"),
        "int_arg": cli_args.int(),
        "float_arg": cli_args.float(),
        "optional": cli_args.option(cli_args.string()),
        "enum_type": cli_args.enum(["a", "b"]),
        "target": cli_args.target_label(),
        "configured_target": cli_args.configured_target_label(),
        "sub_target": cli_args.sub_target(),
        "list_type": cli_args.list(cli_args.int()),
    },
)

# Short flags support
cli_test_short = bxl_main(
    impl = _impl_cli_test_short,
    cli_args = {
        "bool_arg": cli_args.bool(short = "b"),
        "string_arg": cli_args.string(short = "s"),
        "int_arg": cli_args.int(short = "i"),
        "target": cli_args.target_label(short = "t"),
    },
)

# JSON arguments
def _impl_cli_json_arg(ctx):
    my_json = ctx.cli_args.my_json
    _assert_eq(type(my_json["int"]), "int")
    _assert_eq(type(my_json["string"]), "string")
    _assert_eq(my_json["list"], [1, 2, 3])

cli_json_arg = bxl_main(
    impl = _impl_cli_json_arg,
    cli_args = {
        "my-json": cli_args.json(short = "j"),
    },
)

Real example: rust-analyzer Integration (/refs/buck2/prelude/rust/rust-analyzer/resolve_deps.bxl):

# Buck2: BXL for IDE integration (rust-analyzer)
load("@prelude//rust:link_info.bzl", "RustLinkInfo")
load("@prelude//rust/rust-analyzer:provider.bzl", "RustAnalyzerInfo")

TargetInfo = dict[str, typing.Any]

MacroOutput = record(
    actual = TargetLabel,
    dylib = Artifact,
)

ExpandedAndResolved = record(
    expanded_targets = list[TargetLabel],
    queried_proc_macros = dict[TargetLabel, MacroOutput],
    resolved_deps = dict[TargetLabel, TargetInfo],
)

def materialize(ctx: bxl.Context, target: bxl.ConfiguredTargetNode) -> Artifact:
    analysis = ctx.analysis(target)
    sources = analysis.providers()[DefaultInfo].sub_targets["sources"][DefaultInfo].default_outputs[0]
    return sources

def _process_target_config(
        ctx: bxl.Context,
        target: bxl.ConfiguredTargetNode,
        analysis: bxl.AnalysisResult,
        in_workspace: bool) -> TargetInfo:
    target = target.unwrap_forward()
    providers = analysis.providers()
    ra_info = providers[RustAnalyzerInfo]
    resolved_attrs = target.resolved_attrs_eager(ctx)

    # Convert sources to absolute paths
    srcs = list(resolved_attrs.srcs)

    # Remove configured platform from deps
    deps = [dep.label.raw_target() for dep in ra_info.rust_deps]

    # Build target info for rust-analyzer
    attrs = target.attrs_eager()
    return {
        "crate": ra_info.crate.simple,
        "crate_root": ra_info.crate_root,
        "deps": deps,
        "edition": ra_info.edition,
        "env": {k: cmd_args(v, delimiter = "") for k, v in ra_info.env.items()},
        "features": ra_info.features,
        "in_workspace": in_workspace,
        "kind": target.rule_type,
        "label": target.label.raw_target(),
        "name": resolved_attrs.name,
        "proc_macro": _get_nullable_attr(attrs, "proc_macro"),
        "project_relative_buildfile": ctx.fs.project_rel_path(target.buildfile_path),
        "rustc_flags": ra_info.rustc_flags,
        "source_folder": materialize(ctx, target),
        "srcs": srcs,
    }
| Feature | Bazel Aspects | Buck2 BXL |
| --- | --- | --- |
| Primary purpose | Augment build graph | Query and script builds |
| Execution | During analysis phase | Standalone execution |
| Graph access | Shadow graph parallel to targets | Full graph introspection |
| Can build targets | Part of normal build | Yes, explicitly via ctx.build() |
| CLI arguments | Via rule attributes | Native CLI parsing |
| IDE integration | Yes (aspects generate data) | Yes (primary use case) |
| Artifact access | Through providers | Direct materialization |
| Query capabilities | Limited (via rule attrs) | Full uquery/cquery/aquery |
| Output | Providers to consumers | Print, JSON, artifacts |
| Reusability | Attach to rules | Standalone scripts |

Skyframe is Bazel’s incremental evaluation framework.

Bazel Build Phases:
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│   Loading   │  ->  │  Analysis   │  ->  │  Execution  │
└─────────────┘      └─────────────┘      └─────────────┘
 Parse BUILD          Create actions       Run actions
 Evaluate macros      Resolve configs      Write outputs
 Build target graph   Check providers      Cache results

Key Skyframe characteristics:

  • SkyValues (nodes): Immutable objects containing build data
  • SkyFunctions: Build nodes based on keys and dependent nodes
  • Incremental invalidation: Precisely invalidates changed nodes
  • Parallel evaluation: Functions without dependencies run in parallel
  • Phase boundaries: Loading must complete before analysis, etc.
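The invalidation bullets above can be pictured as a tiny key/function graph: results are cached per key, and changing one key re-runs only that key's transitive reverse dependencies. An illustrative Python model, far simpler than the real Skyframe:

```python
class MiniSkyframe:
    """Toy incremental evaluator with cached keys and dirty propagation."""
    def __init__(self, functions, deps):
        self.functions = functions  # key -> fn(list of dep values)
        self.deps = deps            # key -> list of dep keys
        self.cache = {}
        self.computed = []          # log of (re)computed keys

    def evaluate(self, key):
        if key not in self.cache:
            dep_values = [self.evaluate(d) for d in self.deps.get(key, [])]
            self.cache[key] = self.functions[key](dep_values)
            self.computed.append(key)
        return self.cache[key]

    def invalidate(self, key):
        """Drop a key and everything that (transitively) depends on it."""
        if key in self.cache:
            del self.cache[key]
        for parent, parent_deps in self.deps.items():
            if key in parent_deps:
                self.invalidate(parent)

g = MiniSkyframe(
    functions={
        "lib_a": lambda deps: "a.o",
        "lib_b": lambda deps: "b.o",
        "binary": lambda deps: "link(" + ",".join(deps) + ")",
    },
    deps={"binary": ["lib_a", "lib_b"]},
)
g.evaluate("binary")         # cold build: lib_a, lib_b, binary
g.computed.clear()
g.invalidate("lib_a")        # a source of lib_a changed
g.evaluate("binary")
print(g.computed)            # ['lib_a', 'binary'] - lib_b stayed cached
```

Both Skyframe and DICE refine this basic shape with parallel evaluation and change pruning; the difference the next section describes is whether the graph is partitioned into phases or kept as one.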

Starlark-java runtime features (unique to Bazel):

  • Mutability protocol: Thread isolation via try-with-resources pattern - values created within a Mutability scope are automatically frozen when the scope exits, preventing data races without explicit synchronization
  • Step counting: Execution can be limited via setMaxExecutionSteps() to prevent infinite loops - counts logical operations, not wall-clock time
  • CPU profiling hooks: Native integration for performance analysis at function call boundaries
  • GuardedValues: Bindings can be conditionally available based on semantic flags and client data
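The step-counting feature amounts to a counter the interpreter bumps per logical operation and checks against a budget; wall-clock time never enters into it. A hypothetical Python sketch of the mechanism (the real setMaxExecutionSteps() hook lives inside the starlark-java evaluator):

```python
class StepBudgetExceeded(Exception):
    pass

class Evaluator:
    """Bounds evaluation by counting logical steps, not elapsed time."""
    def __init__(self, max_steps):
        self.max_steps = max_steps
        self.steps = 0

    def step(self):
        self.steps += 1
        if self.steps > self.max_steps:
            raise StepBudgetExceeded("exceeded %d steps" % self.max_steps)

    def sum_to(self, n):
        total = 0
        for i in range(n):
            self.step()  # one logical operation per loop iteration
            total += i
        return total

print(Evaluator(max_steps=1000).sum_to(10))  # 45
try:
    Evaluator(max_steps=5).sum_to(10)        # runaway loop gets cut off
except StepBudgetExceeded:
    print("budget exceeded")
```

Counting steps rather than time keeps the limit deterministic: the same program hits the same budget on a fast machine and a slow one.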

Buck2’s DICE (Distributed Incremental Computation Engine)

DICE is Buck2’s single-graph incremental computation system.

Buck2 Single Graph:
┌──────────────────────────────────────────────────────────┐
│                        DICE Graph                        │
│  ┌──────┐   ┌──────┐   ┌───────┐   ┌──────┐   ┌──────┐   │
│  │Parse │ → │Config│ → │Analyze│ → │Action│ → │Output│   │
│  └──────┘   └──────┘   └───────┘   └──────┘   └──────┘   │
│         All nodes in single dependency graph             │
│       No phase boundaries - continuous parallelism       │
└──────────────────────────────────────────────────────────┘

Key DICE characteristics:

  • No phases: Single incremental dependency graph
  • Continuous parallelism: Targets can transition through states in parallel
  • Precise invalidation: Changes invalidate only affected nodes
  • RE-first design: Remote execution integrated from the start
| Aspect | Bazel Skyframe | Buck2 DICE |
| --- | --- | --- |
| Graph structure | Per-phase graphs | Single unified graph |
| Parallelism | Within phases | Across all operations |
| Invalidation | Per-phase + cross-phase | Single invalidation pass |
| Memory model | JVM garbage collection | Rust ownership (no GC) |
| Configuration | Transitions between phases | Modifiers (experimental) |
| Dynamic deps | Limited (aspects) | dynamic_output actions |

Bazel restricts dynamic dependencies to maintain queryability:

# Bazel: Generate BUILD files externally, then run Bazel
# Or use aspects to gather info first
genrule(
    name = "generated_build",
    srcs = [":gather_info"],  # Aspect output
    outs = ["generated_BUILD"],
    cmd = "python generate_build.py $< > $@",
)

Real example from Buck2 (/refs/buck2/prelude/http_archive/unarchive.bzl):

# Buck2: Dynamic output for processing archive contents
def unarchive(
        ctx: AnalysisContext,
        archive: Artifact,
        output_name: str,
        ext_type,
        excludes,
        strip_prefix,
        exec_deps: HttpArchiveExecDeps,
        prefer_local: bool,
        sub_targets: list[str] | dict[str, list[str]]):
    exclude_flags = []
    exclude_hidden = []
    if excludes:
        # First action: list archive contents
        exclusions = ctx.actions.declare_output(output_name + "_exclusions")
        contents = ctx.actions.declare_output(output_name + "_contents")
        tar_script, _ = ctx.actions.write(
            "{}_listing.{}".format(output_name, ext),
            [cmd_args(archive, format = "tar --list -f {} > " + first_param)],
            is_executable = True,
            allow_args = True,
        )
        ctx.actions.run(
            cmd_args(interpreter + [tar_script, contents.as_output()], hidden = [archive]),
            category = "process_exclusions",
        )

        # Dynamic output: create exclusion list after reading contents
        def create_exclusion_list(ctx: AnalysisContext, artifacts, outputs):
            # Read the contents file (artifact is now available)
            files = artifacts[contents].read_string().splitlines()
            exclusion_list = []
            exclude_regexen = [regex(e) for e in excludes]
            for f in files:
                for exclusion in exclude_regexen:
                    if exclusion.match(f):
                        exclusion_list.append(f)
                        break
            # Write the exclusion list
            ctx.actions.write(outputs[exclusions], "\n".join(exclusion_list))

        ctx.actions.dynamic_output(
            dynamic = [contents],               # Read these artifacts
            inputs = [],                        # Additional inputs
            outputs = [exclusions.as_output()], # Bind these outputs
            f = create_exclusion_list,          # Function to run
        )
        exclude_flags.append(cmd_args(exclusions, format = "--exclude-from={}"))
        exclude_hidden.append(exclusions)

    # Continue with unarchive action using exclusion list
    output = ctx.actions.declare_output(output_name, dir = True)
    # ... rest of implementation
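The core of the `create_exclusion_list` callback above is ordinary string and regex work; a standalone Python rendering of that logic (hypothetical `exclusion_list` helper, outside any build system) looks like this:

```python
import re

def exclusion_list(contents: str, excludes: list[str]) -> list[str]:
    """Given the text of a `tar --list` output and exclusion regexes,
    return the files to feed to --exclude-from (first match wins)."""
    patterns = [re.compile(e) for e in excludes]
    excluded = []
    for f in contents.splitlines():
        for pat in patterns:
            if pat.match(f):
                excluded.append(f)
                break   # one match is enough for this file
    return excluded

listing = "src/main.c\ndocs/README.md\ntests/test_main.c\n"
print(exclusion_list(listing, [r"docs/", r"tests/"]))
# -> ['docs/README.md', 'tests/test_main.c']
```

What Buck2 adds on top is the scheduling: this function cannot run until the listing artifact exists, and `dynamic_output` expresses exactly that data dependency inside the action graph.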

Anonymous targets enable analysis-time graph modifications:

# Buck2: Anonymous targets for shared compilation
def _swift_library_impl(ctx: AnalysisContext) -> list[Provider]:
    # Create anonymous target for shared module compilation
    # Multiple libraries with same Swift module share this target
    module_target = ctx.actions.anon_target(
        swift_module_rule,
        {
            "module_name": ctx.attrs.module_name,
            "sources": ctx.attrs.srcs,
        },
    )
    # Wait for anonymous target analysis
    module_info = module_target.artifact("module")
    return [DefaultInfo(default_output = module_info)]
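The sharing behavior can be modeled in a few lines of Python (an illustrative cache, not Buck2's implementation): anonymous targets are keyed by their rule and attributes, so identical requests resolve to a single analysis.

```python
class AnonTargetCache:
    """Toy model of anonymous-target sharing: analyses are memoized on
    the (rule, attrs) key, so identical requests run exactly once."""
    def __init__(self):
        self._cache = {}
        self.analyses = 0   # how many analyses actually ran

    def anon_target(self, rule, attrs):
        key = (rule, tuple(sorted(attrs.items())))
        if key not in self._cache:
            self.analyses += 1  # only a cache miss triggers analysis
            self._cache[key] = "{}({})".format(rule, attrs["module_name"])
        return self._cache[key]

cache = AnonTargetCache()
a = cache.anon_target("swift_module_rule", {"module_name": "Core"})
b = cache.anon_target("swift_module_rule", {"module_name": "Core"})
c = cache.anon_target("swift_module_rule", {"module_name": "UI"})
print(cache.analyses)   # 2: the two "Core" requests shared one analysis
```

This is why two `swift_library` targets in the same module pay for module compilation once rather than twice.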

Both systems implement the Remote Execution API:

// Shared protocol (simplified)
service Execution {
  rpc Execute(ExecuteRequest) returns (Operation);
  rpc WaitExecution(WaitExecutionRequest) returns (Operation);
}

service ContentAddressableStorage {
  rpc FindMissingBlobs(FindMissingBlobsRequest) returns (FindMissingBlobsResponse);
  rpc BatchUpdateBlobs(BatchUpdateBlobsRequest) returns (BatchUpdateBlobsResponse);
  rpc BatchReadBlobs(BatchReadBlobsRequest) returns (BatchReadBlobsResponse);
}

service ActionCache {
  rpc GetActionResult(GetActionResultRequest) returns (ActionResult);
  rpc UpdateActionResult(UpdateActionResultRequest) returns (ActionResult);
}
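In the Remote Execution API, blobs are addressed by a content digest (hash plus size), and clients call `FindMissingBlobs` so they upload only what the server lacks. A minimal in-memory Python sketch of that round trip (illustrative, not a real REAPI client):

```python
import hashlib

def digest(blob: bytes) -> tuple[str, int]:
    """REAPI-style content digest: (sha256 hex, size in bytes)."""
    return hashlib.sha256(blob).hexdigest(), len(blob)

class ContentAddressableStorage:
    """Toy in-memory CAS supporting the FindMissingBlobs round trip."""
    def __init__(self):
        self._blobs = {}

    def find_missing_blobs(self, digests):
        # Server tells the client which blobs it has never seen
        return [d for d in digests if d not in self._blobs]

    def batch_update_blobs(self, blobs):
        for blob in blobs:
            self._blobs[digest(blob)] = blob

cas = ContentAddressableStorage()
cas.batch_update_blobs([b"int main() {}"])
missing = cas.find_missing_blobs([digest(b"int main() {}"),
                                  digest(b"new file")])
print(len(missing))   # 1: only the unseen blob needs uploading
```

Content addressing is what makes both systems' remote caches safe to share across machines: identical inputs always hash to the same key.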
| Aspect | Bazel | Buck2 |
| --- | --- | --- |
| Design philosophy | RE added to existing system | RE-first from inception |
| Local execution | Primary mode, RE optional | Special case of RE |
| Directory hashing | Computed when needed | Pre-computed for RE |
| Build Event Protocol | Yes (BEP) | No (uses BuckEvent) |
| Remote persistent workers | Supported | Local workers only |
| Client configurability | Extensive tuning options | Less configurable |
| Hermeticity | Sandbox available locally | RE enforces, local unsandboxed |

Because both speak the same Remote Execution API, Bazel and Buck2 can share REAPI-compatible remote cache and execution services.


Real example from Bazel (/refs/bazel/tools/build_rules/test_rules.bzl):

# Bazel: Test rule utilities
def success_target(ctx, msg, exe = None):
    """Return a success for an analysis test."""
    exe = exe or ctx.outputs.executable
    ctx.actions.write(
        output = exe,
        content = "#!/bin/bash\ncat <<'__eof__'\n" + msg + "\n__eof__\necho",
        is_executable = True,
    )
    return [DefaultInfo(files = depset([exe]))]

def failure_target(ctx, msg, exe = None):
    """Return a failure for an analysis test."""
    exe = exe or ctx.outputs.executable
    ctx.actions.write(
        output = exe,
        content = "#!/bin/bash\ncat >&2 <<'__eof__'\n" + msg + "\n__eof__\nexit 1",
        is_executable = True,
    )
    return [DefaultInfo(files = depset([exe]))]

def _rule_test_rule_impl(ctx):
    """Check that a rule generates the desired outputs and providers."""
    rule_ = ctx.attr.rule
    rule_name = str(rule_.label)
    exe = ctx.outputs.out
    if ctx.attr.generates:
        # Verify generated files
        generates = sorted(ctx.attr.generates)
        generated = sorted([
            strip_prefix(prefix, f.short_path)
            for f in rule_.files.to_list()
        ])
        if generates != generated:
            fail("rule %s generates %s not %s" %
                 (rule_name, repr(generated), repr(generates)))
    if ctx.attr.provides:
        # Verify providers exist
        files = []
        commands = []
        for k in ctx.attr.provides.keys():
            if hasattr(rule_, k):
                v = repr(getattr(rule_, k))
            else:
                fail("rule %s doesn't provide attribute %s" % (rule_name, k))
            # ... create verification script
    return success_target(ctx, "success", exe = exe)

_rule_test_rule = rule(
    attrs = {
        "rule": attr.label(mandatory = True),
        "generates": attr.string_list(),
        "provides": attr.string_dict(),
        "out": attr.output(),
    },
    implementation = _rule_test_rule_impl,
)

def rule_test(name, rule, generates = None, provides = None, **kwargs):
    """Macro to test rule outputs and providers."""
    _rule_test_rule(
        name = name + "_impl",
        rule = rule,
        generates = generates,
        provides = provides,
        out = name + ".sh",
        testonly = 1,
        visibility = ["//visibility:private"],
    )
    sh_test(
        name = name,
        srcs = [name + "_impl"],
        data = [name + "_impl"],
        deps = [BASH_RUNFILES_DEP],
        **kwargs
    )

def file_test(name, file, content = None, regexp = None, matches = None, **kwargs):
    """Test that a file has given content or matches a pattern."""
    _file_test_rule(
        name = name + "_impl",
        file = file,
        content = content or "",
        regexp = regexp or "",
        matches = matches if matches != None else -1,
        out = name + "_impl.sh",
        testonly = 1,
    )
    sh_test(name = name, srcs = [name + "_impl"], **kwargs)
# Buck2: Test rule with providers
cxx_test(
    name = "mylib_test",
    srcs = ["mylib_test.cc"],
    deps = [
        ":mylib",
        "//third_party:gtest",
    ],
    # Buck2 test-specific attributes
    contacts = ["oncall+myteam@example.com"],
    labels = ["unit", "fast"],
    env = {
        "TEST_DATA_DIR": "$(location :test_data)",
    },
)

# Using BXL for test introspection
def _test_discovery_impl(ctx):
    """Discover all tests matching a pattern."""
    universe = ctx.target_universe(ctx.cli_args.pattern).target_set()
    tests = ctx.cquery().kind(".*_test", universe)
    test_info = []
    for test in tests:
        analysis = ctx.analysis(test)
        providers = analysis.providers()
        test_info.append({
            "label": str(test.label),
            "type": test.rule_type,
            "deps": len(providers[DefaultInfo].default_outputs),
        })
    ctx.output.print_json(test_info)

test_discovery = bxl_main(
    impl = _test_discovery_impl,
    cli_args = {
        "pattern": cli_args.string("..."),
    },
)

# Bazel: Query commands
bazel query "deps(//my:target)" # All dependencies
bazel query "rdeps(//..., //my:target)" # Reverse dependencies
bazel query "kind(cc_library, //...)" # Filter by rule kind
bazel query "attr(visibility, public, //...)" # Filter by attribute
# Configured query (analysis phase)
bazel cquery "deps(//my:target)" --output=jsonproto
# Action query
bazel aquery "//my:target" --output=jsonproto
bazel aquery "mnemonic(CppCompile, //my:target)"
# Build graph visualization
bazel query "deps(//my:target)" --output=graph | dot -Tpng > graph.png
# Buck2: Query commands
buck2 uquery "deps(//my:target)" # Unconfigured query
buck2 cquery "deps(//my:target)" # Configured query
buck2 aquery "//my:target" # Action query
# BXL for advanced queries
buck2 bxl //queries:analysis.bxl:providers_test
buck2 bxl //queries:build.bxl:build_test -- --target //my:target
# Target universe queries in BXL
buck2 bxl //my.bxl:find_tests -- --pattern "//src/..."
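The semantics of `deps()` and `rdeps()` are small enough to model directly; here is a Python sketch over a toy target graph (illustrative, not either tool's implementation), where `deps` is the transitive closure including the target itself, as in both query languages:

```python
def deps(graph, target, seen=None):
    """Transitive dependency closure, like query "deps(//my:target)"."""
    seen = seen if seen is not None else set()
    if target in seen:
        return seen
    seen.add(target)
    for d in graph.get(target, []):
        deps(graph, d, seen)
    return seen

def rdeps(graph, target):
    """Targets whose transitive deps include `target` (reverse deps)."""
    return {t for t in graph if target in deps(graph, t)}

graph = {
    "//app:bin":  ["//lib:core", "//lib:ui"],
    "//lib:ui":   ["//lib:core"],
    "//lib:core": [],
}
print(sorted(deps(graph, "//app:bin")))
# -> ['//app:bin', '//lib:core', '//lib:ui']
print(sorted(rdeps(graph, "//lib:core")))
# -> ['//app:bin', '//lib:core', '//lib:ui']
```

The real differences lie not in these operators but in which graph they run against: unconfigured (uquery/query), configured (cquery), or actions (aquery).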

BXL query example (/refs/buck2/tests/core/bxl/test_target_universe_data/target_universe.bxl):

def _target_universe_test(ctx):
    pattern = "some_cell//:inner"
    target_universe = ctx.target_universe(pattern)
    # Direct targets used to construct universe
    direct_target_set = target_universe.target_set()
    # All targets in the universe (including transitive)
    universe_target_set = target_universe.universe_target_set()
    # Query within the universe
    libraries = ctx.cquery().kind("^library$", universe_target_set)
    ctx.output.print("Direct: {}".format(len(direct_target_set)))
    ctx.output.print("Universe: {}".format(len(universe_target_set)))
    ctx.output.print("Libraries: {}".format(len(libraries)))

target_universe_test = bxl_main(
    impl = _target_universe_test,
    cli_args = {},
)
| Capability | Bazel | Buck2 |
| --- | --- | --- |
| Unconfigured query | bazel query | buck2 uquery |
| Configured query | bazel cquery | buck2 cquery |
| Action query | bazel aquery | buck2 aquery |
| Programmatic query | External tools | BXL scripts |
| Build + query | Separate commands | BXL ctx.build() |
| Artifact inspection | bazel run | BXL ctx.output.ensure() |
| Custom output format | --output=jsonproto | BXL ctx.output.print_json() |

# Bazel: Configuration transition
def _platform_transition_impl(settings, attr):
    return {
        "//command_line_option:cpu": attr.target_cpu,
        "//command_line_option:compilation_mode": "opt",
    }

platform_transition = transition(
    implementation = _platform_transition_impl,
    inputs = [],
    outputs = [
        "//command_line_option:cpu",
        "//command_line_option:compilation_mode",
    ],
)

# Split transition (1:N) for fat binaries
def _fat_binary_transition_impl(settings, attr):
    return {
        "arm64": {"//command_line_option:cpu": "arm64"},
        "x86_64": {"//command_line_option:cpu": "x86_64"},
    }

fat_binary_transition = transition(
    implementation = _fat_binary_transition_impl,
    inputs = [],
    outputs = ["//command_line_option:cpu"],
)

# Apply to rule
my_rule = rule(
    implementation = _impl,
    cfg = platform_transition,
    attrs = {
        "deps": attr.label_list(cfg = "target"),
        "tools": attr.label_list(cfg = "exec"),
        "_allowlist_function_transition": attr.label(
            default = "@bazel_tools//tools/allowlists/function_transition_allowlist",
        ),
    },
)
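A split (1:N) transition conceptually maps one configuration to several; a Python sketch of the fat-binary case above (illustrative, with a hypothetical `fat_binary_split` helper standing in for the transition machinery):

```python
def fat_binary_split(settings: dict) -> dict[str, dict]:
    """Toy 1:N split transition: one configured dep becomes one
    configuration per CPU, each inheriting the other settings."""
    return {
        cpu: {**settings, "//command_line_option:cpu": cpu}
        for cpu in ("arm64", "x86_64")
    }

configs = fat_binary_split({"//command_line_option:compilation_mode": "opt"})
print(sorted(configs))                                # ['arm64', 'x86_64']
print(configs["arm64"]["//command_line_option:cpu"])  # arm64
```

The build system then analyzes the dependency once per returned configuration and hands the rule a list (or dict) of providers, one per split.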
# Buck2: Platform-aware configuration with select
cxx_library(
    name = "my_lib",
    srcs = ["common.cpp"] + select({
        "config//os:linux": ["linux.cpp"],
        "config//os:macos": ["macos.cpp"],
        "config//os:windows": ["windows.cpp"],
    }),
    compiler_flags = select({
        "config//build:debug": ["-g", "-O0"],
        "config//build:release": ["-O3", "-DNDEBUG"],
    }),
    # Platform-specific deps
    deps = select({
        "config//os:linux": ["//third_party:pthread"],
        "DEFAULT": [],
    }),
)

# Toolchain configuration
# toolchains/BUCK
load("@prelude//toolchains:cxx.bzl", "system_cxx_toolchain")

system_cxx_toolchain(
    name = "cxx",
    compiler_type = "clang",
    cxx_flags = select({
        "config//build:debug": ["-g"],
        "config//build:release": ["-O3"],
    }),
    visibility = ["PUBLIC"],
)
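select() resolution itself is a small algorithm: pick the config key that matches the active configuration, fall back to DEFAULT, otherwise fail. A simplified Python model (real Bazel and Buck2 additionally prefer the most specific match when several keys apply, which this sketch treats as an error):

```python
def resolve_select(select_dict: dict, active_configs: set):
    """Toy select() resolution against a set of active config labels."""
    matches = [k for k in select_dict
               if k != "DEFAULT" and k in active_configs]
    if len(matches) > 1:
        raise ValueError("ambiguous select: " + ", ".join(sorted(matches)))
    if matches:
        return select_dict[matches[0]]
    if "DEFAULT" in select_dict:
        return select_dict["DEFAULT"]
    raise ValueError("no matching select branch and no DEFAULT")

srcs = ["common.cpp"] + resolve_select(
    {"config//os:linux": ["linux.cpp"], "config//os:macos": ["macos.cpp"]},
    {"config//os:linux", "config//build:debug"},
)
print(srcs)   # ['common.cpp', 'linux.cpp']
```

Crucially, in both systems this resolution happens after configuration is known but before analysis, which is why select() can appear in attribute values but not inside rule implementations.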

# Bazel: Error in rule implementation
def _my_rule_impl(ctx):
    if not ctx.files.srcs:
        fail("srcs cannot be empty for rule {}".format(ctx.label))
    # Check provider requirements
    for dep in ctx.attr.deps:
        if MyInfo not in dep:
            fail("Dependency {} does not provide MyInfo".format(dep.label))

The resulting analysis error includes a full Starlark traceback:

ERROR: /path/to/BUILD:10:8: in my_rule rule //pkg:target:
Traceback (most recent call last):
        File "/path/to/rules.bzl", line 5, column 9, in _my_rule_impl
                fail("srcs cannot be empty for rule {}".format(ctx.label))
Error in fail: srcs cannot be empty for rule //pkg:target
# Buck2: Error handling with expect utility
load("@prelude//utils:expect.bzl", "expect")

def _my_rule_impl(ctx: AnalysisContext) -> list[Provider]:
    expect(
        ctx.attrs.srcs,
        "srcs cannot be empty for rule {}".format(ctx.label),
    )
    # Type checking is automatic: if ctx.attrs.srcs has the wrong type,
    # Buck2 raises a runtime error with a stack trace.

# Custom error handlers for actions
def _generate_error_handler(category: str, errorformats: list[str] | None):
    def handler(ctx: ActionErrorCtx) -> list[ActionSubError]:
        if errorformats != None:
            return ctx.parse_with_errorformat(
                category = category,
                error = ctx.stderr,
                errorformats = errorformats,
            )
        return []
    return handler
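The error-handler pattern above boils down to parsing structured diagnostics out of stderr. Here is a Python stand-in for that idea (a hypothetical regex-based parser, assuming clang-style `file:line:col: error:` output, not Buck2's actual errorformat engine):

```python
import re

def parse_compiler_errors(stderr: str, category: str) -> list[dict]:
    """Pull file:line:col: error: message lines out of stderr and
    return them as structured sub-errors."""
    pattern = re.compile(
        r"^(?P<file>[^:]+):(?P<line>\d+):(?P<col>\d+): error: (?P<msg>.*)$"
    )
    sub_errors = []
    for line in stderr.splitlines():
        m = pattern.match(line)
        if m:
            sub_errors.append({
                "category": category,
                "file": m["file"],
                "line": int(m["line"]),
                "message": m["msg"],
            })
    return sub_errors

stderr = ("main.cpp:42:10: error: use of undeclared identifier 'foo'\n"
          "linker: ok\n")
print(parse_compiler_errors(stderr, "cxx_compile"))
```

Structured sub-errors like these are what let Buck2 categorize and aggregate failures across thousands of actions instead of dumping raw logs.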

Key Migration Steps: Bazel to Buck2

  1. File renaming: BUILD -> BUCK, keep .bzl extension
  2. Provider updates:
    • Replace depset with transitive sets
    • Add type annotations
    • Use provider_field() for typed fields
  3. Rule API changes:
    • ctx.attr -> ctx.attrs
    • ctx.actions.declare_file() -> ctx.actions.declare_output()
    • Add .as_output() markers
    • Use cmd_args() for command building
  4. Convert aspects to BXL for IDE/tooling integrations
  5. Update toolchain references to explicit targets

Key Migration Steps: Buck2 to Bazel

  1. File renaming: BUCK -> BUILD
  2. Type system downgrades:
    • Replace record with dicts
    • Replace enum with string constants
    • Remove union types
  3. Provider updates:
    • Convert transitive sets to depsets
    • Remove provider_field() type annotations
  4. Rule API changes:
    • ctx.attrs -> ctx.attr
    • Remove .as_output() markers
    • Convert cmd_args() to ctx.actions.args()
  5. Rewrite BXL as aspects + external tools

Bazel ecosystem:

| Category | Resources |
| --- | --- |
| Documentation | bazel.build - comprehensive |
| Rules repos | rules_cc, rules_java, rules_python, rules_go, rules_rust, etc. |
| Community | Bazel Slack, GitHub Discussions |
| IDE Support | IntelliJ plugin, VSCode extension, multiple LSPs |
| Enterprise | EngFlow, Aspect, BuildBuddy (commercial support) |
Buck2 ecosystem:

| Category | Resources |
| --- | --- |
| Documentation | buck2.build - growing |
| Prelude | Single comprehensive standard library |
| Community | GitHub Issues, Discord |
| IDE Support | BXL-based (rust-analyzer, compilation DB) |
| Enterprise | Meta internal, emerging community |