vllm.utils.flashinfer ¶
Compatibility wrapper for FlashInfer API changes.
Users of vLLM should always import only these wrappers.
_flashinfer_concat_mla_k ¶
Custom op wrapper for flashinfer's concat_mla_k.
This is an in-place operation that concatenates k_nope and k_pe into k.
The kernel is optimized for DeepSeek V3 dimensions:

- num_heads=128
- nope_dim=128
- rope_dim=64

Key optimizations:

- Warp-based processing with software pipelining
- Vectorized memory access (int2 for nope, int for rope)
- L2 prefetching of the next row while processing the current one
- Register reuse of rope values across all heads
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| k | Tensor | Output tensor, shape [num_tokens, num_heads, nope_dim + rope_dim]. Modified in-place. | required |
| k_nope | Tensor | The nope part of k, shape [num_tokens, num_heads, nope_dim]. | required |
| k_pe | Tensor | The rope part of k (shared), shape [num_tokens, 1, rope_dim]. This is broadcast to all heads. | required |
Source code in vllm/utils/flashinfer.py
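The in-place broadcast-and-concatenate semantics can be sketched in pure Python. This is a hypothetical reference for the data layout only (the real op is a fused CUDA kernel operating on torch tensors); all names here are illustrative.

```python
def concat_mla_k_reference(k, k_nope, k_pe):
    """Reference semantics: k[t][h] = k_nope[t][h] ++ k_pe[t][0].

    The rope part k_pe has a single head and is broadcast to every
    head of k_nope; k is modified in place, mirroring the real op.
    """
    for t in range(len(k_nope)):
        rope = list(k_pe[t][0])  # shape [rope_dim], shared by all heads
        for h in range(len(k_nope[t])):
            k[t][h] = list(k_nope[t][h]) + rope
    return k
```

With num_tokens=1, num_heads=2, nope_dim=2, rope_dim=1, each output row is the head's nope values followed by the shared rope values.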
_get_submodule ¶
Safely import a submodule and return it, or None if not available.
_lazy_import_wrapper ¶
_lazy_import_wrapper(
module_name: str,
attr_name: str,
fallback_fn: Callable[..., Any] = _missing,
)
Create a lazy import wrapper for a specific function.
Source code in vllm/utils/flashinfer.py
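The lazy-import pattern can be sketched as follows. This is an assumed minimal implementation, not vLLM's exact code: the returned wrapper defers the FlashInfer import until first call and falls back to a placeholder when the module or attribute is unavailable.

```python
import importlib
from typing import Any, Callable


def _missing(*args: Any, **kwargs: Any) -> Any:
    # Placeholder for an unavailable backend (raises on use).
    raise RuntimeError("FlashInfer backend is not available.")


def lazy_import_wrapper(
    module_name: str,
    attr_name: str,
    fallback_fn: Callable[..., Any] = _missing,
) -> Callable[..., Any]:
    """Return a callable that imports module_name.attr_name on first use."""
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        try:
            mod = importlib.import_module(module_name)
            fn = getattr(mod, attr_name)
        except (ImportError, AttributeError):
            fn = fallback_fn
        return fn(*args, **kwargs)
    return wrapper
```

Deferring the import keeps `vllm.utils.flashinfer` importable even when flashinfer-python is not installed; only calling a wrapped function requires the backend.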
_missing ¶
Placeholder for unavailable FlashInfer backend.
Source code in vllm/utils/flashinfer.py
can_use_trtllm_attention ¶
Check if the current configuration supports TRTLLM attention.
Source code in vllm/utils/flashinfer.py
flashinfer_mm_mxfp8 ¶
flashinfer_mm_mxfp8(
a: Tensor,
b: Tensor,
block_scale_a: Tensor,
block_scale_b: Tensor,
out_dtype: dtype,
backend: str = "cutlass",
) -> Tensor
MXFP8 MM helper that mirrors the flashinfer_scaled_fp4_mm API.
Takes non-transposed weights and handles the transpose internally.
CRITICAL: the mm_mxfp8 CUTLASS kernel requires swizzled 1D scales for correct accuracy and optimal performance. Both input and weight scales should be in swizzled format, as produced by FlashInfer's mxfp8_quantize(is_sf_swizzled_layout=True).
Source code in vllm/utils/flashinfer.py
force_use_trtllm_attention ¶
force_use_trtllm_attention() -> bool | None
This function should only be called during the initialization stage, once the vLLM config is set. Returns None if --attention-config.use_trtllm_attention is not set, True if TRTLLM attention is forced on, and False if it is forced off.
Source code in vllm/utils/flashinfer.py
has_flashinfer cached ¶
has_flashinfer() -> bool
Return True if flashinfer-python package is available.
Source code in vllm/utils/flashinfer.py
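A cached package-availability check of this kind can be written with `importlib.util.find_spec`. This is a minimal sketch of the pattern under that assumption, not necessarily vLLM's exact implementation.

```python
from functools import cache
from importlib.util import find_spec


@cache
def has_package(name: str) -> bool:
    """Return True if the named package is importable (result is cached)."""
    # find_spec probes the import system without actually importing the
    # package, which keeps repeated availability checks cheap.
    return find_spec(name) is not None
```

Caching matters here because these helpers are called on hot paths; the import system is probed at most once per package name.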
has_flashinfer_all2all cached ¶
has_flashinfer_all2all() -> bool
Return True if FlashInfer mnnvl all2all is available.
Source code in vllm/utils/flashinfer.py
has_flashinfer_comm cached ¶
has_flashinfer_comm() -> bool
Return True if FlashInfer comm module is available.
has_flashinfer_cubin cached ¶
has_flashinfer_cubin() -> bool
Return True if flashinfer-cubin package is available.
Source code in vllm/utils/flashinfer.py
has_flashinfer_cutedsl cached ¶
has_flashinfer_cutedsl() -> bool
Return True if FlashInfer cutedsl module is available.
has_flashinfer_cutedsl_grouped_gemm_nt_masked cached ¶
has_flashinfer_cutedsl_grouped_gemm_nt_masked() -> bool
Return True if FlashInfer CUTLASS fused MoE is available.
Source code in vllm/utils/flashinfer.py
has_flashinfer_cutlass_fused_moe cached ¶
has_flashinfer_cutlass_fused_moe() -> bool
Return True if FlashInfer CUTLASS fused MoE engine is available.
Only checks for the core CUTLASS MoE entry point. FP4-specific utilities (fp4_quantize, nvfp4_block_scale_interleave) are checked separately via has_flashinfer_nvfp4() and gated by _supports_quant_scheme(). This allows FP8 CUTLASS MoE to work on architectures like SM121 (GB10) that have cutlass_fused_moe but may lack FP4 utilities.
Source code in vllm/utils/flashinfer.py
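The layered gating described above (core CUTLASS MoE vs. FP4-specific utilities) can be sketched as a pure decision function. All names here are hypothetical; the point is that FP8 paths need only the core entry point, while NVFP4 paths additionally need the FP4 utilities.

```python
def cutlass_moe_usable(has_cutlass_moe: bool, has_nvfp4: bool, quant: str) -> bool:
    # Core entry point (cutlass_fused_moe) is required for every scheme.
    if not has_cutlass_moe:
        return False
    # NVFP4 additionally requires fp4_quantize / nvfp4_block_scale_interleave,
    # which has_flashinfer_nvfp4() checks separately.
    if quant == "nvfp4":
        return has_nvfp4
    return True
```

This is why an SM121 (GB10) device with cutlass_fused_moe but no FP4 utilities can still run FP8 CUTLASS MoE.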
has_flashinfer_fp8_blockscale_gemm cached ¶
has_flashinfer_fp8_blockscale_gemm() -> bool
Return True if FlashInfer block-scale FP8 GEMM is available.
Source code in vllm/utils/flashinfer.py
has_flashinfer_moe cached ¶
has_flashinfer_moe() -> bool
Return True if FlashInfer MoE module is available.
has_flashinfer_nvfp4 cached ¶
has_flashinfer_nvfp4() -> bool
Return True if FlashInfer NVFP4 quantization utilities are available.
Checks for fp4_quantize and nvfp4_block_scale_interleave which are required for NVFP4 quantization paths but not for FP8 CUTLASS MoE.
Source code in vllm/utils/flashinfer.py
has_flashinfer_trtllm_fused_moe cached ¶
has_flashinfer_trtllm_fused_moe() -> bool
Return True if FlashInfer TRTLLM fused MoE is available.
Source code in vllm/utils/flashinfer.py
has_nvidia_artifactory cached ¶
has_nvidia_artifactory() -> bool
Return True if NVIDIA's artifactory is accessible.
This checks connectivity to the kernel inference library artifactory, which is required for downloading certain cubin kernels such as the TRTLLM FMHA kernels.
Source code in vllm/utils/flashinfer.py
is_flashinfer_fp8_blockscale_gemm_supported cached ¶
is_flashinfer_fp8_blockscale_gemm_supported() -> bool
Return True if FlashInfer block-scale FP8 GEMM is supported.
Source code in vllm/utils/flashinfer.py
supports_trtllm_attention cached ¶
supports_trtllm_attention() -> bool
TRTLLM attention is supported if the platform is SM100/SM103, NVIDIA artifactory is accessible, and batch-invariant mode is not enabled.
Note: TRTLLM attention kernels are NOT supported on SM12x (GB10). FlashInfer's benchmark matrix confirms trtllm-native is only available for SM10.0/10.3 (B200/GB200), not SM12.0/12.1 (GB10). SM12x devices should fall back to other attention backends (FA2, cuDNN, etc.).
Source code in vllm/utils/flashinfer.py
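The compute-capability gate described in the note can be expressed as a small predicate. This is a hypothetical helper that only restates the SM matrix above; in practice the (major, minor) pair would come from `torch.cuda.get_device_capability()`.

```python
def supports_trtllm_sm(major: int, minor: int) -> bool:
    # Per the note above: trtllm-native kernels are available only on
    # SM10.0/SM10.3 (B200/GB200); SM12.x (GB10) must fall back to other
    # backends (FA2, cuDNN, etc.).
    return (major, minor) in {(10, 0), (10, 3)}
```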
use_trtllm_attention ¶
use_trtllm_attention(
num_qo_heads: int,
num_kv_heads: int,
num_tokens: int,
max_seq_len: int,
dcp_world_size: int,
kv_cache_dtype: str,
q_dtype: dtype,
is_prefill: bool,
force_use_trtllm: bool | None = None,
has_sinks: bool = False,
has_spec: bool = False,
) -> bool
Return True if TRTLLM attention is used.
Source code in vllm/utils/flashinfer.py