peft online with Winfy

We host the application peft so that you can run it in our online workstations, either with Wine or directly.


Quick description of peft:

Parameter-Efficient Fine-Tuning (PEFT) methods enable efficient adaptation of pre-trained language models (PLMs) to various downstream applications without fine-tuning all of the model's parameters. Fine-tuning large-scale PLMs is often prohibitively costly. PEFT methods instead fine-tune only a small number of (extra) model parameters, greatly decreasing computational and storage costs. Recent state-of-the-art PEFT techniques achieve performance comparable to that of full fine-tuning.
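
As a minimal sketch of typical usage (assuming the peft and transformers packages are installed; the checkpoint name, hyperparameters, and output directory below are illustrative only), a LoRA adapter can be attached to a pre-trained model so that only a small set of extra parameters is trained:

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, TaskType, get_peft_model

    # Load a pre-trained base model (example checkpoint).
    base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")

    # Configure a LoRA adapter; only its small low-rank matrices are trainable.
    peft_config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=8,
        lora_alpha=32,
        lora_dropout=0.1,
    )

    model = get_peft_model(base_model, peft_config)
    model.print_trainable_parameters()  # trainable parameters are a small fraction of the total

    # After training, only the adapter weights need to be stored.
    model.save_pretrained("./bloomz-560m-lora-adapter")  # example output directory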

Features:
  • Accelerate integration for large-scale models, leveraging DeepSpeed and Big Model Inference
  • Comparable performance to full fine-tuning when adapting LLMs to downstream tasks on consumer hardware (see the adapter-loading sketch after this list)
  • Reduced GPU memory requirements for adapting LLMs to few-shot datasets
  • Parameter-efficient tuning of diffusion models
  • Reduced GPU memory requirements across different training settings
  • Parameter-efficient tuning of LLMs for RLHF components such as the ranker and policy
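
Continuing the sketch above (the base model checkpoint and adapter directory reuse the same illustrative names), a saved adapter can later be re-attached to the frozen base model for inference:

    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    # Reload the base model and attach the trained LoRA adapter.
    base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
    model = PeftModel.from_pretrained(base_model, "./bloomz-560m-lora-adapter")
    model.eval()

Because only the adapter weights are saved, the checkpoint is typically a small fraction of the full model size, which is what makes adapting and sharing large models practical on modest hardware.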


Programming Language: Python.
Categories:
Large Language Models (LLM)

©2024 Winfy. All Rights Reserved.

By OD Group OU – Registry code: 1609791 – VAT number: EE102345621.