phixtral-2x2_8
Overview:
Phixtral-2x2_8 is the first Mixture of Experts (MoE) model built from two microsoft/phi-2 models, inspired by the mistralai/Mixtral-8x7B-v0.1 architecture, and it outperforms each of its individual expert models. It scores well across several benchmark suites, including AGIEval, GPT4All, TruthfulQA, and Bigbench. The model was assembled with a customized version of the mergekit library (mixtral branch) and a dedicated merge configuration. It has 4.46B parameters, is stored in the F16 tensor type, and can be run at 4-bit precision on free T4 GPUs via Colab notebooks.
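Merges of this kind are typically described in a small YAML file passed to mergekit. The sketch below is a hypothetical configuration assuming two phi-2 finetunes as experts (the expert repo names and prompts are placeholders, not the actual recipe used for Phixtral):

```yaml
# Hypothetical mergekit MoE config sketch (expert names and prompts are
# placeholders; the real Phixtral recipe may differ).
base_model: microsoft/phi-2
gate_mode: cheap_embed        # route tokens using cheap embedding-based gating
experts:
  - source_model: example-org/phi-2-finetune-a   # placeholder expert
    positive_prompts:
      - "chat and general assistance"
  - source_model: example-org/phi-2-finetune-b   # placeholder expert
    positive_prompts:
      - "code generation"
```

The `positive_prompts` hint to the gate which kinds of inputs each expert should handle.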
Target Users:
Suitable for use cases such as text generation, model evaluation, and deep learning research.
Total Visits: 29.7M
Top Region: US (17.94%)
Website Views: 61.0K
Use Cases
Use the phixtral-2x2_8 model for text generation in deep learning research
Utilize the phixtral-2x2_8 model for model evaluation
Run the phixtral-2x2_8 model on a GPU at 4-bit precision
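The 4-bit use case above can be sketched with Hugging Face transformers and bitsandbytes. This is a minimal sketch, assuming the model lives at the repo id `mlabonne/phixtral-2x2_8` and that `trust_remote_code=True` is needed for the model's custom MoE routing code; the prompt is illustrative:

```python
MODEL_ID = "mlabonne/phixtral-2x2_8"  # assumed Hugging Face repo id

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load Phixtral at 4-bit precision and generate a completion."""
    # Heavy imports are kept local so the file can be inspected or imported
    # on machines without torch/transformers installed.
    import torch
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        BitsAndBytesConfig,
    )

    # 4-bit weight quantization via bitsandbytes; compute in float16.
    quant = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
    )
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        quantization_config=quant,
        device_map="auto",          # place layers on the available GPU
        trust_remote_code=True,     # custom MoE code ships with the repo
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0], skip_special_tokens=True)

# Example (requires a CUDA GPU, e.g. a free Colab T4):
# print(generate("Explain mixture-of-experts routing in one sentence."))
```

A T4's 16 GB of VRAM comfortably fits the 4.46B-parameter model once the weights are quantized to 4 bits.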
Features
Generate text using a mixed expert model
Run the model on Colab notebooks at 4-bit precision on GPUs
Model size: 4.46B parameters; tensor type: F16