Llama-2-7b-HiCI-16k

Model Description

This is a HiCI adapter checkpoint for Llama-2-7B, extending its context window to 16K tokens. It contains three components: LoRA adapters (q/k/v/o_proj), HiCI module weights (LocalConstructor + GlobalIntegrator), and fine-tuned embedding + LayerNorm weights.

Paper: HiCI (arXiv 2603.20843)

HiCI Architecture

Three-stage hierarchy per transformer layer:

  1. Local Construction β€” M learnable query slots attend to each segment via bottleneck cross-attention β†’ local summary L_i
  2. Global Integration β€” multi-view statistics (mean/max/min/std/β„“2-norm) β†’ shared compression β†’ attention-based selection β†’ gated expansion β†’ G
  3. Top-down Broadcast β€” per-segment attention with augmented KV=[G, L_i, segment tokens]; queries from segment tokens only

Example flow at 16K context:
  Input (16K tokens) → 4 segments × 4,096 tokens
  Stage 1: 8 local slots per segment → L_i
  Stage 2: multi-view stats → K=4 global slots G
  Stage 3: Q=[segment tokens], KV=[G, L_i, segment tokens] → Flash Attention
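
A minimal PyTorch sketch of the three stages for a single layer, using toy dimensions; the slot counts follow the flow above, but module names, the bottleneck/compression details, and gating are simplified assumptions rather than the repo's implementation:

import torch
import torch.nn as nn

# Toy dimensions for illustration only (the real model uses d=4096, T=4096 per segment).
d, S, T, M, K = 256, 4, 512, 8, 4
x = torch.randn(1, S * T, d)                 # (batch, full context, hidden)
segments = x.view(1, S, T, d)

# Stage 1: Local Construction — M learnable query slots attend to each segment.
local_queries = nn.Parameter(torch.randn(1, M, d))
attn_local = nn.MultiheadAttention(d, num_heads=8, batch_first=True)
L = torch.stack(
    [attn_local(local_queries, segments[:, i], segments[:, i])[0] for i in range(S)],
    dim=1,
)                                            # (1, S, M, d): local summaries L_i

# Stage 2: Global Integration — multi-view statistics over all local slots,
# projected into K global slots G (attention-based selection and gating omitted).
flat = L.view(1, S * M, d)
stats = torch.cat(
    [flat.mean(1), flat.max(1).values, flat.min(1).values, flat.std(1), flat.norm(dim=1)],
    dim=-1,
)                                            # (1, 5 * d) multi-view statistics
to_global = nn.Linear(5 * d, K * d)
G = to_global(stats).view(1, K, d)           # (1, K, d): global slots

# Stage 3: Top-down Broadcast — queries come from segment tokens only,
# keys/values are augmented with [G, L_i, segment tokens].
attn_broadcast = nn.MultiheadAttention(d, num_heads=8, batch_first=True)
outputs = []
for i in range(S):
    kv = torch.cat([G, L[:, i], segments[:, i]], dim=1)   # (1, K + M + T, d)
    out, _ = attn_broadcast(segments[:, i], kv, kv)
    outputs.append(out)
y = torch.cat(outputs, dim=1)                # (1, S * T, d): context-enriched tokens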

Trainable Components

adapter_model.bin  (27 MB)
└── LoRA Adapters (r=8, alpha=16): q_proj, k_proj, v_proj, o_proj

trainable_params.bin  (~2 GB)
├── local_constructor.*                          — Local Construction modules (32 layers)
├── global_integrator.*                          — Global Integration modules (32 layers)
├── input_layernorm / post_attention_layernorm   — LayerNorm weights (32 layers)
├── model.embed_tokens.weight                    — Token embeddings
└── model.norm.weight                            — Final LayerNorm
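
As a quick sanity check, the breakdown above can be reproduced by grouping the keys of trainable_params.bin by their top-level prefix. The key prefixes are assumptions based on the tree, and the file must first be downloaded locally:

import collections
import torch

# Group the non-LoRA checkpoint's parameter keys by top-level component.
# Assumes trainable_params.bin sits in the working directory.
state = torch.load("trainable_params.bin", map_location="cpu")
counts = collections.Counter(key.split(".")[0] for key in state)
for component, n in counts.most_common():
    print(f"{component}: {n} tensors")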

Training Details

  • Base Model: meta-llama/Llama-2-7b-hf
  • Context Length: 16,384 tokens (16K)
  • Segments: 4 Γ— 4,096 tokens
  • Local Representation Slots (M): 8 per segment
  • Global Representation Slots (K): 4
  • HiCI Attention Heads: 8, Bottleneck dim: 512, Shared compress dim: 128
  • LoRA: r=8, alpha=16, target: q/k/v/o_proj
  • Checkpoint: step 1000
  • Batch: per_device=1, grad_accum=8 (effective batch=8)
  • LR: 2e-5 (base/LoRA), 2e-4 (HiCI modules), grad clip=0.3
  • Precision: bf16
  • Hardware: 8Γ— H100 80GB, DeepSpeed Stage 2

Usage

Requires llama_attn_hici.py from this repo.

import torch
import transformers
from peft import PeftModel
import llama_attn_hici as hici_attn

# 1. Replace attention with HiCI BEFORE loading model
hici_attn.MIXED_GROUP_TRAINING = False
hici_attn.replace_llama_attn(use_flash_attn=True, use_full=False, use_hierarchical_forward=True)

# 2. Load base model
base_model = transformers.AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16, device_map="auto",
)

# 3. Register HiCI modules (must match training config)
hici_attn.register_hici_to_model(base_model, num_memory_slots=8, global_slots=4, num_heads=8, bottleneck_dim=512)

# 4. Load LoRA adapter (the non-LoRA weights in trainable_params.bin are applied separately; see below)
model = PeftModel.from_pretrained(base_model, "ZengXiangyu/Llama-2-7b-HiCI-16k")

# 5. Tokenizer
tokenizer = transformers.AutoTokenizer.from_pretrained("ZengXiangyu/Llama-2-7b-HiCI-16k")
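
After loading the adapter, the remaining trainable weights (HiCI modules, embeddings, LayerNorms) in trainable_params.bin still need to be merged into the model. A hedged sketch, assuming the LongLoRA-style convention of loading the extra state dict with strict=False from a local clone of the repo, followed by a minimal generation call:

import os
import torch

# 6. Load the non-LoRA trainable weights (HiCI modules, embeddings, LayerNorms).
#    The path assumes a local clone/snapshot of the repo; the loading convention is an assumption.
ckpt_dir = "./Llama-2-7b-HiCI-16k"
trainable_params_path = os.path.join(ckpt_dir, "trainable_params.bin")
if os.path.isfile(trainable_params_path):
    state = torch.load(trainable_params_path, map_location="cpu")
    model.load_state_dict(state, strict=False)

model.eval()

# 7. Minimal generation example on a long dummy prompt (the HiCI forward pass segments it internally).
long_document = " ".join(["The quick brown fox jumps over the lazy dog."] * 1200)
inputs = tokenizer(long_document + "\n\nSummarize the text above.", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))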

Citation

@article{zeng2026hici,
  title={HiCI: Hierarchical Construction-Integration for Long-Context Attention},
  author={Zeng, Xiangyu and Xu, Qi and Wang, Yunke and Xu, Chang},
  journal={arXiv preprint arXiv:2603.20843},
  year={2026}
}

License

This model follows the Llama 2 Community License.
