We host the application tiny-cuda-nn (tiny CUDA neural networks) so that it can be run in our online workstations, either with Wine or directly.


Quick description of tiny-cuda-nn (tiny CUDA neural networks):

This is a small, self-contained framework for training and querying neural networks. Most notably, it contains a lightning-fast "fully fused" multi-layer perceptron (technical paper), a versatile multiresolution hash encoding (technical paper), as well as support for various other input encodings, losses, and optimizers. A sample application is provided in which an image function (x, y) -> (R, G, B) is learned.

The fully fused MLP component of this framework requires a very large amount of shared memory in its default configuration. It will likely only work on an RTX 3090, an RTX 2080 Ti, or high-end enterprise GPUs. On lower-end cards, reduce the n_neurons parameter or use the CutlassMLP (better compatibility, but slower) instead.

tiny-cuda-nn comes with a PyTorch extension that allows the fast MLPs and input encodings to be used from within a Python context. These bindings can be significantly faster than full Python implementations, in particular for the multiresolution hash encoding.
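As a quick illustration of those bindings, the sketch below queries the multiresolution hash encoding on its own from Python. It is only a minimal example: the configuration values are illustrative assumptions, not recommended settings.

    import torch
    import tinycudann as tcnn

    # Standalone multiresolution hash encoding through the PyTorch bindings.
    # All configuration values below are illustrative assumptions.
    encoding = tcnn.Encoding(
        n_input_dims=2,
        encoding_config={
            "otype": "HashGrid",
            "n_levels": 16,
            "n_features_per_level": 2,
            "log2_hashmap_size": 19,
            "base_resolution": 16,
            "per_level_scale": 2.0,
        },
    )

    coords = torch.rand(10_000, 2, device="cuda")  # (x, y) inputs in [0, 1)^2
    features = encoding(coords)                    # (10_000, n_levels * n_features_per_level) features
    print(features.shape)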

Features:
  • Simple C++/CUDA API
  • Bundled sample application that learns a 2D image function (x, y) -> (R, G, B); see the sketch after this list
  • Requires an NVIDIA GPU
  • On Windows: requires Visual Studio 2019
  • On Linux: requires GCC/G++ 7.5 or higher
  • Requires CUDA v10.2 or higher and CMake v3.21 or higher
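
For the "learn a 2D image" sample listed above, a rough equivalent through the PyTorch extension might look like the following sketch. It is an assumption-laden illustration: the random target tensor stands in for a real image, and the hash-grid and MLP settings are placeholders rather than the sample's actual defaults.

    import torch
    import torch.nn.functional as F
    import tinycudann as tcnn

    device = torch.device("cuda")

    # Model: multiresolution hash encoding feeding a fully fused MLP.
    # On lower-end GPUs, lower n_neurons or use "CutlassMLP" instead of "FullyFusedMLP".
    model = tcnn.NetworkWithInputEncoding(
        n_input_dims=2,   # (x, y)
        n_output_dims=3,  # (R, G, B)
        encoding_config={"otype": "HashGrid", "n_levels": 16, "n_features_per_level": 2,
                         "log2_hashmap_size": 19, "base_resolution": 16, "per_level_scale": 2.0},
        network_config={"otype": "FullyFusedMLP", "activation": "ReLU",
                        "output_activation": "None", "n_neurons": 64, "n_hidden_layers": 2},
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

    # Stand-in for a real image: a random 256x256 RGB target.
    target_image = torch.rand(256, 256, 3, device=device)

    for step in range(1000):
        # Sample a batch of (x, y) coordinates in [0, 1)^2 and look up their target colors.
        coords = torch.rand(2**14, 2, device=device)
        pixels = (coords * (target_image.shape[0] - 1)).long()
        targets = target_image[pixels[:, 1], pixels[:, 0]]

        pred = model(coords).float()  # the network computes in half precision internally
        loss = F.mse_loss(pred, targets)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if step % 100 == 0:
            print(f"step={step} loss={loss.item():.6f}")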


Programming Language: C++.
Categories:
Frameworks, Machine Learning
