DLVS3

 

DEEP LEARNING VISUAL SPACE SIMULATION SYSTEM

Objective

Provide quality synthetic data and a pipelined testbed for training deep-learning models for different types of space activities based on visual inspection.

We create high-end training material for camera-based deep-learning solutions.

  • Real data practically does not exist
  • DOE* data is too expensive or simply unavailable
 

We provide a real-time simulation environment for testing purposes.

  • No matter how good a model looks in theory, it can fail at its first real encounter
 

You need better data with smaller, more efficient on-board models.

In space, unique lighting conditions and the absence of atmospheric interference make it possible to train customized models that excel at visual recognition. While traditional visual foundation models rely on vast internet-based photo databases, our approach harnesses high-caliber synthetic images to achieve higher accuracy. Furthermore, by optimizing network architecture and power consumption, we ensure efficient operation even in resource-constrained settings such as satellite on-board inference. This strategy holds immense potential for vision-based navigation in extraterrestrial environments.

Key features

Physics simulation

  • N-body solver

    Standard n-body and extended-body gravity calculations

  • Atmospheric lighting effects

    Raytraced lights and shadows, translucent clouds, luminescence, lightning strikes, aurora effects

  • Camera effects

    Selectable objectives and sensors, light reflections, vignetting, sensor noise simulation, dust and particle overlays
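The n-body bullet above can be sketched as a minimal gravity integrator. The solver's actual method and parameters are not specified in this document, so the symplectic-Euler step below is only illustrative:

```python
# Minimal n-body gravity step (semi-implicit / symplectic Euler).
# Illustrative sketch only; not the DLVS3 solver's actual implementation.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accelerations(positions, masses):
    """Pairwise Newtonian gravitational acceleration on each body."""
    n = len(positions)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx)
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += G * masses[j] * dx[k] * inv_r3
    return acc

def step(positions, velocities, masses, dt):
    """Advance the system one time step: update velocities, then positions."""
    acc = accelerations(positions, masses)
    for i in range(len(positions)):
        for k in range(3):
            velocities[i][k] += acc[i][k] * dt
            positions[i][k] += velocities[i][k] * dt
    return positions, velocities
```

A production solver would use a higher-order integrator and the extended-body terms mentioned above; the O(n²) pairwise loop is only the standard starting point.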
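The camera-effects bullet (sensor noise, vignetting) can be approximated in a few lines. The gain, read-noise, and falloff values below are illustrative assumptions, not the simulator's defaults:

```python
import random

def apply_sensor_noise(pixel, gain=0.02, read_noise=2.0, rng=random):
    """Shot (Poisson-like) noise plus Gaussian read noise on one 0-255 pixel.
    Parameter values are illustrative only."""
    shot = rng.gauss(0.0, (max(pixel, 0.0) * gain) ** 0.5)  # variance grows with signal
    read = rng.gauss(0.0, read_noise)
    return min(255.0, max(0.0, pixel + shot + read))

def vignette(pixel, x, y, width, height, strength=0.4):
    """Radial brightness falloff toward the image corners."""
    cx, cy = (width - 1) / 2, (height - 1) / 2
    r = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
    r_max = (cx ** 2 + cy ** 2) ** 0.5
    return pixel * (1.0 - strength * (r / r_max) ** 2)
```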

Automatic annotation

  • Object masks for semantic segmentation

    Mask images generated for training and testing neural network solutions

  • Navigation lists

    All stars and solar-system bodies in an image can be listed with subpixel-precise positioning

  • 3D markers

    Important real or theoretical positions can be calculated and logged

  • 6-DoF pose parameters

    Calculated pose parameters relative to the camera
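Subpixel positions like those in the navigation lists can be obtained by projecting a 3D point through a simple pinhole camera model. The function name and intrinsics (fx, fy, cx, cy) below are assumptions for illustration, not the actual annotation pipeline:

```python
def project_point(p_cam, fx, fy, cx, cy):
    """Project a 3D point in camera coordinates (z forward) to subpixel
    image coordinates with a pinhole model. fx, fy are focal lengths in
    pixels; cx, cy is the principal point."""
    x, y, z = p_cam
    if z <= 0:
        return None  # point is behind the camera
    u = fx * x / z + cx
    v = fy * y / z + cy
    return (u, v)
```

Because u and v are kept as floats rather than rounded to integer pixels, the annotation retains subpixel precision.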

Feature rich API

  • Scene definition

    Setting up the scenario

  • Object library

    Upload/download static and dynamic 3D models

  • Modding

    Use your own satellite navigation and control simulation and/or take complete control of scene objects

  • MInD DatabaseService interoperability

    Common annotation scheme for semantic segmentation, object detection, nonlinear regression, and single/multi-label decisions

  • MInD BrainService inference

    Masking, 6-DoF pose estimation, deployment to on-premise and edge devices
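To give a feel for the scene-definition feature, here is a purely hypothetical payload. None of the field names below are documented DLVS3 API; they only illustrate the kind of information a scenario setup would carry:

```python
import json

# Hypothetical scene definition; every field name below is an assumption
# made for illustration, not the actual DLVS3 API schema.
scene = {
    "epoch": "2025-03-01T12:00:00Z",
    "bodies": ["Earth", "Moon", "Sun"],
    "target": {"model": "hubble.glb", "orbit": "LEO", "altitude_km": 540},
    "camera": {"sensor": "rgb", "fov_deg": 40, "noise": True},
    "annotations": ["masks", "navigation_list", "pose_6dof"],
}
payload = json.dumps(scene)
```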

6D pose estimation using RGB(D)

6D pose estimation is the task of determining an object’s six-degree-of-freedom (6D) pose, i.e. its position and orientation in 3D space, from RGB(D) images. It is a fundamental problem in computer vision and robotics. The optional depth channel can be obtained with various technologies, such as time-of-flight (ToF) sensors, structured-light sensors, or stereo camera setups.
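A 6-DoF pose can be represented as a rotation plus a translation. The sketch below builds a 4x4 homogeneous transform from Euler angles and applies it to a point; the ZYX Euler parameterisation is one common convention, chosen here purely for illustration:

```python
import math

def pose_matrix(yaw, pitch, roll, t):
    """4x4 homogeneous transform from ZYX Euler angles (radians) and a
    translation t = (tx, ty, tz): one parameterisation of a 6-DoF pose."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    # R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr               ],
    ]
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0.0, 0.0, 0.0, 1.0]]

def apply_pose(T, p):
    """Transform a 3D point p = (x, y, z) by the pose matrix T."""
    return tuple(sum(T[i][j] * (p + (1.0,))[j] for j in range(4)) for i in range(3))
```

In practice a pose estimator regresses these rotation and translation parameters relative to the camera frame, which is exactly the annotation the simulator logs.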

 

[Images: HubbleSolution, HubbleM scaled, RealHubbleTest]

 

Semantic segmentation

Semantic segmentation is a computer-vision task whose goal is to categorize each pixel in an image into a class, producing a dense pixel-wise segmentation map of the entire image. It can help identify objects, their properties, and their relationships within the scene. Models are usually evaluated with the Mean Intersection-over-Union (Mean IoU) and Pixel Accuracy metrics.
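The two evaluation metrics named above can be computed as follows; this is a minimal reference implementation for flat lists of per-pixel labels, not tied to any particular framework:

```python
def mean_iou_and_pixel_accuracy(pred, target, num_classes):
    """Mean IoU and pixel accuracy for flat lists of per-pixel class labels.
    Classes absent from both prediction and target are skipped in the mean."""
    correct = sum(p == t for p, t in zip(pred, target))
    ious = []
    for c in range(num_classes):
        inter = sum(p == c and t == c for p, t in zip(pred, target))
        union = sum(p == c or t == c for p, t in zip(pred, target))
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious), correct / len(pred)
```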

Back to the Moon

A new start after 50 years. This time using tools boosted with artificial intelligence…


Project timeline

Ray-tracing offline render

  • 2024

    LEO/MEO/GEO/HEO orbits

  • 2025

    Moon orbits (100 km+)

  • 2026

    Moon surface

Realtime render

  • 2024

    GEO/HEO orbits

  • 2025

    MEO orbits, Moon orbits (100 km+)

  • 2026

    LEO orbits, Moon surface

API and Simulation

  • 2024

    MInD interoperability, n-body simulation

  • 2025

    Object library, modding, scene definition, compact AI solutions on edge

  • 2026

    Test environment for 3rd party solutions