DEEP LEARNING VISUAL SPACE SIMULATION SYSTEM

DLVS3 is revolutionizing satellite pose estimation and vision-based navigation (VBN) for landing through advanced synthetic data generation and deep learning techniques. Our cutting-edge system bridges the domain gap in vision-based deep learning solutions for space activities, enabling safer and more efficient autonomous spacecraft operations.

Our Mission

Our mission is to enhance the safety, efficiency, and success of autonomous proximity operations in next-generation satellites. By generating photorealistic synthetic data and providing a comprehensive simulation environment, DLVS3 enables the development and testing of robust deep learning models for space applications.

Rendering and simulation engine in a nutshell

Our advanced multipass rendering pipeline combines the power of real-time and offline rendering techniques to create photorealistic space environments:

Engines

  • Real-time rendering

    Utilizing Unreal Engine 5, we achieve 30+ fps for full Earth and Moon rendering, enabling interactive scenario testing and rapid iteration.

  • Offline rendering

    For ultra-high fidelity images, we employ path tracing techniques using SideFX Houdini, capturing complex light interactions and material properties.

Automatic annotation for training

  • Object masks for semantic segmentation

    Mask images generated for training and testing neural network solutions

  • Navigation lists

    All stars / solar-system bodies in an image can be listed with subpixel-precise positioning

  • 3D markers

    Important real or theoretical positions can be calculated and logged

  • 6+ DoF pose parameters

    Pose parameters are calculated relative to the camera. Satellites are handled as articulated systems with movable parts such as antennas, solar panels, etc.

  • Advanced regression masks

    Ground-truth camera-space surface normal maps, depth maps, and dense pose masks are available for advanced training
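
To make the annotation outputs above concrete, a single frame's annotation bundle could look like the following sketch. The field names and file layout are illustrative assumptions, not DLVS3's actual export schema:

    # Hypothetical annotation record for one rendered frame (illustrative only;
    # the real DLVS3 schema may differ).
    frame_annotation = {
        "image": "frame_000421.png",
        "object_mask": "frame_000421_mask.png",     # semantic segmentation labels
        "depth_map": "frame_000421_depth.exr",      # ground-truth depth
        "normal_map": "frame_000421_normals.exr",   # camera-space surface normals
        "navigation_list": [                        # subpixel star / body positions
            {"body": "MOON", "pixel": [512.37, 218.91]},
        ],
        "markers_3d": [                             # logged real or theoretical points
            {"name": "docking_port", "xyz_camera_m": [0.12, -0.54, 18.30]},
        ],
        "pose": {                                   # 6+ DoF pose relative to camera
            "translation_m": [0.0, 0.0, 25.0],
            "quaternion_wxyz": [0.92, 0.0, 0.38, 0.0],
            "joint_angles_deg": {"solar_panel": 12.5, "hga_antenna": -3.0},
        },
    }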

Features

  • Accurate celestial body positioning using SPICE kernels

    Leverages NASA's SPICE system for high-precision celestial body positioning.

  • PBR (Physically Based Rendering) materials

    DLVS3 employs advanced Physically Based Rendering (PBR) materials to achieve photorealistic results in satellite and space environment simulations.

  • Multi-light source handling

    Uses a sophisticated multi-light source system to accurately simulate the complex lighting conditions in space, with the Sun plus secondary reflecting sources such as the Earth, Moon, chaser satellite, etc.

  • Atmospheric scattering simulation

    Implements a physically accurate atmospheric scattering model to realistically render Earth's atmosphere with Rayleigh and Mie scattering (see the sketch after this list)

  • Lighting effects

    Raytraced lights and shadows, translucent clouds, luminescence, lightning strikes, aurora effects, night-side city lights

  • Dynamic cloud generation and movement

    Clouds are generated procedurally using advanced noise algorithms, allowing for infinite variation and real-time adjustments.

  • Camera effects

    Selectable objectives (lenses) and sensors, light reflections, vignetting, sensor-noise simulation, dust and particle overlays
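
As a flavor of the physics behind the atmospheric scattering feature above, the sketch below (a textbook relation, not DLVS3 code) computes relative Rayleigh scattering coefficients for representative RGB wavelengths; the 1/λ⁴ falloff is why Earth's limb renders blue:

    import numpy as np

    # Rayleigh scattering strength scales as 1 / wavelength^4.
    wavelengths_nm = np.array([680.0, 550.0, 440.0])  # approximate red, green, blue
    beta = (1.0 / wavelengths_nm) ** 4
    print(beta / beta[0])  # relative to red: blue scatters ~5.7x more strongly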

Articulated high-resolution HST model in DLVS3

We create high-end training material for camera-based deep-learning solutions.

  • Real data is practically nonexistent
  • DOE* data is too expensive or simply unavailable

We provide a real-time simulation environment for testing purposes.

  • No matter how good a solution is in theory, it can fail at the first encounter with reality

You need better data and smaller, more efficient on-board models.

In space, unique lighting conditions offer exciting opportunities for enhancing visual recognition systems. By leveraging this distinct environment, we can train customized models that excel in the absence of atmospheric interference. While traditional visual foundation model approaches rely on vast internet-based photo databases, our approach focuses on harnessing high-caliber synthetic images to achieve unparalleled accuracy. Furthermore, by optimizing network architecture and power consumption, we can ensure efficient operation even in resource-constrained settings like satellite on-board inference on radiation-hardened (RHBD) hardware. This strategy holds immense potential for revolutionizing vision-based navigation in extraterrestrial environments.

DEEP DIVE

PBR (Physically Based Rendering) materials

DLVS3 employs advanced Physically Based Rendering (PBR) materials to achieve photorealistic results in satellite and space environment simulations. Key aspects include:

  • Metallic-Roughness Workflow: DLVS3 utilizes the industry-standard metallic-roughness PBR workflow, which defines material properties using base color, metallic, and roughness maps.
  • Material Library: A comprehensive library of space-specific materials has been developed, including:
    • Anodized aluminum with varying levels of wear
    • Multi-layer insulation (MLI) with realistic creasing and light interaction
    • Solar panel materials with accurate spectral responses
    • Specialized coatings used in space applications, such as thermal control paints
  • Bidirectional Reflectance Distribution Function (BRDF): DLVS3 implements physically accurate BRDFs to model how light interacts with different material surfaces, crucial for capturing the complex reflective properties of spacecraft materials.
  • Energy Conservation: The rendering system ensures energy conservation, maintaining physical accuracy in light interactions across different materials.
  • Anisotropic Reflections: For materials like brushed metals commonly found on spacecraft, DLVS3 supports anisotropic reflections to accurately represent directional reflectivity.
PBR materials with anisotropic reflections in DLVS3
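
To make the metallic-roughness workflow and BRDF bullets concrete, here is a minimal NumPy sketch of the standard Cook-Torrance specular term with a GGX distribution and Schlick Fresnel. This is a textbook formulation under common real-time approximations, not DLVS3's actual shader code:

    import numpy as np

    def ggx_specular(n, l, v, base_color, metallic, roughness):
        """Cook-Torrance specular term: GGX distribution, Schlick Fresnel,
        Smith/Schlick-GGX geometry. n, l, v are unit vectors (numpy arrays)."""
        h = (l + v) / np.linalg.norm(l + v)              # half vector
        a = roughness ** 2                               # perceptual -> alpha
        n_dot_h = max(np.dot(n, h), 0.0)
        n_dot_l = max(np.dot(n, l), 1e-4)
        n_dot_v = max(np.dot(n, v), 1e-4)

        # D: GGX / Trowbridge-Reitz normal distribution function
        d = a ** 2 / (np.pi * (n_dot_h ** 2 * (a ** 2 - 1.0) + 1.0) ** 2)

        # F: Schlick Fresnel; dielectrics reflect ~4%, metals tint by base color
        f0 = 0.04 * (1.0 - metallic) + np.asarray(base_color) * metallic
        f = f0 + (1.0 - f0) * (1.0 - max(np.dot(h, v), 0.0)) ** 5

        # G: Smith geometry term with the Schlick-GGX approximation
        k = (roughness + 1.0) ** 2 / 8.0
        g = (n_dot_v / (n_dot_v * (1.0 - k) + k)) * \
            (n_dot_l / (n_dot_l * (1.0 - k) + k))

        return d * f * g / (4.0 * n_dot_v * n_dot_l)

    # Example: brushed-aluminum-like material lit from 45 degrees
    n = np.array([0.0, 0.0, 1.0])
    l = np.array([0.0, 0.7071, 0.7071])
    v = np.array([0.0, -0.7071, 0.7071])
    print(ggx_specular(n, l, v, base_color=[0.91, 0.92, 0.92],
                       metallic=1.0, roughness=0.3))

Energy conservation then amounts to scaling the diffuse lobe by the energy the specular lobe did not reflect, so the two together never exceed the incoming light.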

Multi-light source handling

DLVS3 incorporates a sophisticated multi-light source system to accurately simulate the complex lighting conditions in space:

  • Sun Simulation: The sun is modeled as a directional light source with accurate size, position, and intensity.
  • Earth Albedo: Earth’s albedo is simulated using a dynamic, spherical area light that accounts for:
    • Varying reflectivity based on surface features (oceans, land masses, ice caps)
    • Atmospheric scattering effects
    • Time-of-day and seasonal variations
  • Lunar Illumination: When relevant, the moon is modeled as an additional light source, taking into account its phase and position relative to the spacecraft and Earth.
  • Artificial Lights: DLVS3 can simulate various artificial light sources, including:
    • Spacecraft-mounted lights (e.g., docking lights, inspection lights)
    • Light pollution from Earth’s cities for low Earth orbit scenarios
    • Other nearby spacecraft or space stations
  • Global Illumination: The system employs advanced global illumination techniques to accurately model light bounces and indirect lighting, crucial for capturing the nuanced lighting conditions in space.
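
As a back-of-the-envelope check on the Earth-albedo bullet above, the reflected-sunlight irradiance reaching a spacecraft can be estimated by treating Earth as a uniform reflector at full phase. This is a gross simplification of DLVS3's dynamic area light, intended for intuition only:

    # Rough estimate of Earth-albedo irradiance on a spacecraft (simplified:
    # Earth treated as a uniform reflector seen at full phase).
    SOLAR_CONSTANT = 1361.0   # W/m^2 at 1 AU
    EARTH_RADIUS_KM = 6371.0

    def earth_albedo_irradiance(altitude_km, albedo=0.3):
        """Irradiance (W/m^2) from Earth-reflected sunlight at a given altitude."""
        r = EARTH_RADIUS_KM / (EARTH_RADIUS_KM + altitude_km)
        return SOLAR_CONSTANT * albedo * r ** 2

    print(earth_albedo_irradiance(400.0))    # LEO: roughly 360 W/m^2
    print(earth_albedo_irradiance(35786.0))  # GEO: roughly 9 W/m^2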

Dynamic cloud generation and movement

DLVS3 features a dynamic cloud system for Earth simulations:

  • Procedural Generation: Clouds are generated procedurally using advanced noise algorithms, allowing for infinite variation and real-time adjustments.
  • Multi-Layer System: The cloud system employs multiple layers to represent different cloud types:
    • Low-altitude cumulus and stratus clouds
    • Mid-level altocumulus and altostratus
    • High-altitude cirrus clouds
  • Physical Properties: Each cloud layer simulates physical properties including:
    • Density variations
    • Light scattering and absorption
    • Self-shadowing and ground shadow casting
  • Global Weather Patterns: Cloud movement and formation are influenced by simulated global weather patterns, taking into account factors like prevailing winds and temperature variations.
  • Temporal Evolution: The cloud system evolves over time, simulating the formation, dissipation, and movement of cloud formations for extended simulation periods.
Sunset over the Fly River delta

Accurate celestial body and spacecraft positioning using SPICE kernels

DLVS3 leverages NASA’s SPICE (Spacecraft Planet Instrument C-matrix Events) system for high-precision celestial body positioning:

  • SPICE Integration: The system directly integrates SPICE kernels, allowing for accurate positioning of planets, moons, and spacecraft within the solar system.
  • High-Precision Ephemerides: DLVS3 uses the latest high-precision ephemerides data from SPICE, ensuring accurate positions for all major celestial bodies.
  • Time Synchronization: The simulation’s time system is synchronized with SPICE’s internal time representation, allowing for precise temporal alignment across all simulated elements.
  • Custom Spacecraft Trajectories: DLVS3 can incorporate custom SPICE kernels for specific spacecraft missions, enabling accurate simulation of historical, current, and planned space operations.
  • Coordinate Frame Transformations: The system utilizes SPICE’s robust coordinate frame transformation capabilities, allowing seamless switching between different reference frames (e.g., J2000, ICRF, TEME, body-fixed frames).
  • Planetary Rotation Models: Accurate planetary rotation models are implemented using SPICE data, crucial for precise surface feature positioning in planetary approach and landing scenarios.
Earth-Moon scene with updated reflection model and albedo contrast, similar to DSCOVR satellite imagery from 16 July 2015
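
A minimal spiceypy sketch of the kind of query such an integration performs (the kernel file names below are placeholders; a real setup loads the mission's full kernel set):

    import spiceypy as spice

    # Load a leap-seconds kernel and a planetary ephemeris (placeholder paths).
    spice.furnsh("naif0012.tls")
    spice.furnsh("de440.bsp")

    # Convert a UTC epoch to ephemeris time (TDB seconds past J2000).
    et = spice.str2et("2025-07-16T12:00:00")

    # Position of the Moon relative to Earth in the J2000 frame,
    # corrected for light time and stellar aberration.
    pos, light_time = spice.spkpos("MOON", et, "J2000", "LT+S", "EARTH")
    print(pos)   # kilometers

    spice.kclear()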

6D+ pose estimation using RGB(D)

6D+ pose estimation is the task of determining an object's six-plus degree-of-freedom pose in 3D space from RGB(D) images. This involves estimating the position and orientation of an articulated object with several joints in a scene and is a fundamental problem in computer vision and robotics. The optional depth channel can be obtained with various technologies, such as time-of-flight (ToF) sensors, structured-light sensors, or stereo camera setups.
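
For evaluating such estimators, benchmarks like SPEED+ score translation and rotation errors separately; the sketch below shows a common formulation (standard metrics, not necessarily the exact scoring of the upcoming DLVS3 dataset):

    import numpy as np

    def pose_errors(t_gt, t_est, q_gt, q_est):
        """Relative translation error and rotation error (radians).
        Quaternions are unit-norm, scalar-first (w, x, y, z)."""
        e_t = np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt)) \
              / np.linalg.norm(t_gt)                     # relative translation error
        dot = abs(np.dot(q_gt, q_est))                   # abs() handles double cover
        e_r = 2.0 * np.arccos(np.clip(dot, -1.0, 1.0))   # geodesic rotation distance
        return e_t, e_r

    # Example: 10 cm of translation error at 25 m range, ~2 degrees of rotation error
    e_t, e_r = pose_errors([0, 0, 25.0], [0.1, 0, 25.0],
                           [1, 0, 0, 0], [0.9998, 0.0175, 0, 0])
    print(e_t, np.degrees(e_r))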


Semantic segmentation

Semantic segmentation is a computer vision task whose goal is to categorize each pixel in an image into a class, producing a dense pixel-wise segmentation map of the entire image. It helps identify objects, their properties, and their relationships within the scene. Models are usually evaluated with the Mean Intersection-over-Union (Mean IoU) and pixel accuracy metrics.
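
The Mean IoU metric mentioned above is simple to compute from predicted and ground-truth label maps; a compact NumPy version:

    import numpy as np

    def mean_iou(pred, target, num_classes):
        """Mean Intersection-over-Union over all classes present in either map."""
        ious = []
        for c in range(num_classes):
            inter = np.logical_and(pred == c, target == c).sum()
            union = np.logical_or(pred == c, target == c).sum()
            if union > 0:                 # skip classes absent from both maps
                ious.append(inter / union)
        return float(np.mean(ious))

    # Toy 2x3 label maps with classes {0: background, 1: satellite body, 2: panel}
    pred   = np.array([[0, 1, 1], [2, 2, 0]])
    target = np.array([[0, 1, 1], [2, 0, 0]])
    print(mean_iou(pred, target, num_classes=3))   # -> 0.722...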

Our test pose estimation dataset will be released in Q2 2025

The dataset will be freely available to any entity. Stay tuned!

The upcoming dataset compared with existing pose-estimation datasets (SPEED+, SHIRT, SPADES, RAPTOR, SPARK, URSO) on supported annotations: random poses, trajectories, ground truth, keypoints, semantic segmentation, depth maps, normal maps, multiple lights, and high dynamic range.

Number of images per dataset:

  DLVS3     1,000,000
  SPEED+    70,000
  SHIRT     4,700
  SPADES    TBC
  RAPTOR    120,000
  SPARK     150,000
  URSO      15,000

Back to the Moon

A new start after 50 years, this time with tools boosted by artificial intelligence…


Extension to lunar landing

Project timeline

Ray-tracing offline render

  • 2024

    LEO/MEO/GEO/HEO orbits

  • 2025

    Moon orbits (100 km+)

  • 2026

    Moon surface

Real-time render

  • 2024

    GEO/HEO orbits

  • 2025

    MEO orbits, Moon orbits (100 km+)

  • 2026

    LEO Orbits, Moon surface

API and Simulation

  • 2024

    MInD interoperability, n-body simulation

  • 2025

    Object library, modding, scene definition, compact AI solutions at the edge

  • 2026

    Test environment for third-party solutions