Live Demos

See It in Action

Real-world detection and tracking demos showing the LVS-250's capabilities across varying conditions.

Detection Demos

The same code powering these demos runs on your development machine today. Deploy to LVS-250 silicon for production performance.


Daylight Detection

Multi-class object detection in daylight conditions

Real-time detection and tracking of vehicles, personnel, and objects with high confidence scores. Detection, segmentation, and pose estimation run in parallel on the dual NPU cores.

4K @ 60fps processing
< 10ms end-to-end latency
Multi-model parallel execution
95%+ detection accuracy

Technical: Running YOLOv8-nano optimized for LVS-250. All pre/post-processing handled by model-optimized APIs.


Low-Light / IR Detection

Enhanced detection in challenging lighting

Advanced detection capabilities in low-light and infrared spectrum for 24/7 operations. Thermal fusion and IR enhancement enable reliable detection when visible light fails.

Thermal sensor fusion
IR spectrum enhancement
Night vision support
Adaptive gain control

Technical: Same inference pipeline as daylight demo. LVS Vision Library handles sensor-specific preprocessing automatically.

Behind the Demos

What makes these demos possible, and how you can achieve the same results.

Multi-Model Execution

Run detection, tracking, and classification simultaneously. The dual NPU architecture handles multiple models in parallel without performance degradation.
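On the host side, the dispatch pattern looks like fanning one frame out to several models at once. The sketch below uses standard-library threads and placeholder inference functions; the model names and return values are invented for illustration and are not the LVS SDK API.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder inference functions standing in for NPU-resident models.
def detect(frame):
    return {"task": "detection", "frame": frame}

def track(frame):
    return {"task": "tracking", "frame": frame}

def classify(frame):
    return {"task": "classification", "frame": frame}

def run_parallel(frame):
    """Dispatch one frame to all three models concurrently and
    collect results in submission order."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(fn, frame) for fn in (detect, track, classify)]
        return [f.result() for f in futures]

results = run_parallel(frame=0)
print([r["task"] for r in results])  # ['detection', 'tracking', 'classification']
```

On LVS-250 silicon the parallelism comes from the dual NPU cores rather than host threads, but the application-level shape, one frame in, several model outputs back, is the same.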

Automatic Pre/Post-Processing

All image preprocessing (letterboxing, NMS, coordinate scaling) handled automatically by model-optimized APIs. Focus on your application logic.

Power-Performance Trade-offs

Choose between increased FPS for faster detection or reduced power consumption for extended mission duration. Critical for edge deployment.
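To make the trade-off concrete, a back-of-envelope runtime calculation: halving average power draw doubles mission duration on the same battery. The power figures and battery capacity below are invented for illustration and are not LVS-250 specifications.

```python
def mission_hours(battery_wh, power_w):
    """Runtime in hours for a given battery capacity and average draw."""
    return battery_wh / power_w

# Hypothetical operating points (illustrative numbers only).
battery_wh = 40.0    # battery capacity, watt-hours
high_fps_w = 8.0     # high-frame-rate mode, higher draw
low_power_w = 4.0    # reduced-frame-rate mode, lower draw

print(mission_hours(battery_wh, high_fps_w))   # 5.0
print(mission_hours(battery_wh, low_power_w))  # 10.0
```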

Same Code, Any Target

Develop on your laptop, deploy to LVS-250 silicon. The SDK handles all hardware abstraction, ensuring your code runs identically on both.

demo_detection.py
import lvs

# Initialize LVS-250 device
device = lvs.connect()

# Load optimized model (handles all preprocessing)
model = lvs.Model.load("yolov8-nano-defense")

# Create inference pipeline
pipeline = device.create_pipeline(
    model=model,
    input_source="mipi-csi",
    fps=60
)

# Run real-time inference
for frame in pipeline.stream():
    for det in frame.detections:
        print(f"{det.class_name}: {det.confidence:.2f}")

Ready to Transform Your Edge AI Capabilities?

Schedule a demo to see the LVS-250 in action. Our team will show you how next-generation edge AI can accelerate your mission.

Questions? Email us at info@lolavisionsystems.com