Note that the model weights are to be used for research purposes only: they are derivative of LLaMA, and they were trained on the published instruction data from the Stanford Alpaca project, which was generated with OpenAI models; OpenAI's terms disallow using its outputs to train competing models.

Alpaca.cpp by Kevin Kwok builds on Facebook's LLaMA, Stanford Alpaca, alpaca-lora and the corresponding weights by Eric Wang (which uses Jason Phang's implementation of LLaMA on top of Hugging Face Transformers), and llama.cpp by Georgi Gerganov. The chat implementation is based on Matvey Soloviev's Interactive Mode for llama.cpp. Inspired by Simon Willison's getting-started guide for LLaMA; see also Andy Matuschak's thread on adapting this to the 13B model, using fine-tuning weights by Sam Witteveen. You can learn by reading the source code or build something on top of the existing projects.

To build for Android, configure with the NDK toolchain:

$ cmake -DCMAKE_TOOLCHAIN_FILE=$NDK/build/cmake/ -DANDROID_ABI=arm64-v8a -DANDROID_PLATFORM=android-23 -DCMAKE_C_FLAGS=-march=armv8.4a+dotprod

Install termux on your device and run termux-setup-storage to get access to your SD card. Finally, copy the llama binary and the model files to your device storage. Here is a screenshot of an interactive session running on a Pixel 7 Pro phone.
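As a minimal sketch of the configure step, the flags can be assembled and inspected before running them on a machine with the Android NDK installed. The `/opt/android-ndk` default below is a hypothetical placeholder, not a path from the article; the toolchain-file path is reproduced as given above.

```shell
# Sketch only: assumes llama.cpp has been cloned and $NDK points at an
# installed Android NDK (/opt/android-ndk is a hypothetical placeholder).
NDK="${NDK:-/opt/android-ndk}"

# Flags from the article: 64-bit ARM ABI, Android API level 23, and
# -march=armv8.4a+dotprod to enable the dot-product instructions that
# speed up quantized inference on recent phones.
CMAKE_ARGS="-DCMAKE_TOOLCHAIN_FILE=$NDK/build/cmake/ \
  -DANDROID_ABI=arm64-v8a \
  -DANDROID_PLATFORM=android-23 \
  -DCMAKE_C_FLAGS=-march=armv8.4a+dotprod"

# Print the full command; with the NDK installed you would run it from a
# fresh build directory, then run make to produce the llama binary.
echo "cmake $CMAKE_ARGS"
```

On older phones without the ARMv8.4 dot-product extension, dropping the `-DCMAKE_C_FLAGS` option and letting the toolchain pick a baseline `-march` is the safer choice.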