Llama-7b-Chinese Local Inference

Base environment: Ubuntu 22.04 on WSL2 with Miniconda

Setting up the environment with Miniconda



(base) :~$ conda create --name Llama-7b-Chinese  python=3.10
Channels:
 - defaults
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: done

## Package Plan ##

  environment location: /home/chop/miniconda3/envs/Llama-7b-Chinese

  added / updated specs:
    - python=3.10


The following NEW packages will be INSTALLED:

  _libgcc_mutex      anaconda/pkgs/main/linux-64::_libgcc_mutex-0.1-main
  _openmp_mutex      anaconda/pkgs/main/linux-64::_openmp_mutex-5.1-1_gnu
  bzip2              anaconda/pkgs/main/linux-64::bzip2-1.0.8-h5eee18b_5
  ca-certificates    anaconda/pkgs/main/linux-64::ca-certificates-2024.3.11-h06a4308_0
  ld_impl_linux-64   anaconda/pkgs/main/linux-64::ld_impl_linux-64-2.38-h1181459_1
  libffi             anaconda/pkgs/main/linux-64::libffi-3.4.4-h6a678d5_0
  libgcc-ng          anaconda/pkgs/main/linux-64::libgcc-ng-11.2.0-h1234567_1
  libgomp            anaconda/pkgs/main/linux-64::libgomp-11.2.0-h1234567_1
  libstdcxx-ng       anaconda/pkgs/main/linux-64::libstdcxx-ng-11.2.0-h1234567_1
  libuuid            anaconda/pkgs/main/linux-64::libuuid-1.41.5-h5eee18b_0
  ncurses            anaconda/pkgs/main/linux-64::ncurses-6.4-h6a678d5_0
  openssl            anaconda/pkgs/main/linux-64::openssl-3.0.13-h7f8727e_0
  pip                anaconda/pkgs/main/linux-64::pip-23.3.1-py310h06a4308_0
  python             anaconda/pkgs/main/linux-64::python-3.10.14-h955ad1f_0
  readline           anaconda/pkgs/main/linux-64::readline-8.2-h5eee18b_0
  setuptools         anaconda/pkgs/main/linux-64::setuptools-68.2.2-py310h06a4308_0
  sqlite             anaconda/pkgs/main/linux-64::sqlite-3.41.2-h5eee18b_0
  tk                 anaconda/pkgs/main/linux-64::tk-8.6.12-h1ccaba5_0
  tzdata             anaconda/pkgs/main/noarch::tzdata-2024a-h04d1e81_0
  wheel              anaconda/pkgs/main/linux-64::wheel-0.41.2-py310h06a4308_0
  xz                 anaconda/pkgs/main/linux-64::xz-5.4.6-h5eee18b_0
  zlib               anaconda/pkgs/main/linux-64::zlib-1.2.13-h5eee18b_0


Proceed ([y]/n)? y


Downloading and Extracting Packages:

Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
#     $ conda activate Llama-7b-Chinese
#
# To deactivate an active environment, use
#
#     $ conda deactivate

(base) :~$ conda activate Llama-7b-Chinese
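
Before moving on, it is worth confirming that the shell is now using the interpreter from the new environment rather than the base one. A minimal check (the expected paths follow the Miniconda layout shown in the package plan above):

(Llama-7b-Chinese) :~$ which python      # should print /home/chop/miniconda3/envs/Llama-7b-Chinese/bin/python
(Llama-7b-Chinese) :~$ python --version  # should report Python 3.10.14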


Downloading llama.cpp

(Llama-7b-Chinese) :~/newmodels$ git clone https://github.com/ggerganov/llama.cpp.git
(Llama-7b-Chinese) :~/newmodels$ cd llama.cpp/
(Llama-7b-Chinese) :~/newmodels/llama.cpp$ ls
CMakeLists.txt     common.o                       ggml-backend.c    ggml-sycl.h                  llama.o           sampling.o
LICENSE            console.o                      ggml-backend.h    ggml-vulkan-shaders.hpp      llama2-tuili.py   save-load-state
Makefile           convert-hf-to-gguf.py          ggml-backend.o    ggml-vulkan.cpp              llava-cli         scripts
Package.swift      convert-llama-ggml-to-gguf.py  ggml-cuda.cu      ggml-vulkan.h                lookahead         server
README-Origin.md   convert-llama2c-to-ggml        ggml-cuda.h       ggml.c                       lookup            simple
README-sycl.md     convert-lora-to-ggml.py        ggml-cuda.o       ggml.h                       main              speculative
README.md          convert-persimmon-to-gguf.py   ggml-impl.h       ggml.o                       main.log          spm-headers
SHA256SUMS         convert.py                     ggml-kompute.cpp  ggml_vk_generate_shaders.py  media             stage_1_convert.sh
awq-py             direct_trainsformers.py        ggml-kompute.h    gguf                         models            stage_5_test_token.sh
baby-llama         docs                           ggml-metal.h      gguf-py                      mypy.ini          tests
batched            embedding                      ggml-metal.m      grammar-parser.o             parallel          tokenize
batched-bench      examples                       ggml-metal.metal  grammars                     passkey           train-text-from-scratch
beam-search        export-lora                    ggml-mpi.c        imatrix                      perplexity        train.o
benchmark-matmult  finetune                       ggml-mpi.h        infill                       pocs              unicode.h
build-info.o       flake.lock                     ggml-opencl.cpp   kompute                      prompts           vdot
build.zig          flake.nix                      ggml-opencl.h     kompute-shaders              q8dot             zh-models
ci                 ggml-alloc.c                   ggml-quants.c     libllava.a                   quantize
cmake              ggml-alloc.h                   ggml-quants.h     llama-bench                  quantize-stats
codecov.yml        ggml-alloc.o                   ggml-quants.o     llama.cpp                    requirements
common             ggml-backend-impl.h            ggml-sycl.cpp     llama.h                      requirements.txt
(Llama-7b-Chinese) :~/newmodels/llama.cpp$
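
llama.cpp moves quickly and its conversion scripts are occasionally renamed between revisions, so if you want this walkthrough to stay reproducible it can help to record, and optionally pin, the commit you built from. A small sketch; <commit-or-tag> is a placeholder for whatever revision you verified:

(Llama-7b-Chinese) :~/newmodels/llama.cpp$ git log -1 --oneline          # record the commit this build used
(Llama-7b-Chinese) :~/newmodels/llama.cpp$ git checkout <commit-or-tag>  # optional: pin to a known-good revision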

Installing the required Python packages


(Llama-7b-Chinese) :~/Chinese-LLaMA-Alpaca/llama.cpp$ pip install -r requirements.txt
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting numpy~=1.24.4 (from -r ./requirements/requirements-convert.txt (line 1))
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/10/be/ae5bf4737cb79ba437879915791f6f26d92583c738d7d960ad94e5c36adf/numpy-1.24.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.3/17.3 MB 3.5 MB/s eta 0:00:00
Collecting sentencepiece~=0.1.98 (from -r ./requirements/requirements-convert.txt (line 2))
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/7f/e5/323dc813b3e1339305f888d035e2f3725084fc4dcf051995b366dd26cc90/sentencepiece-0.1.99-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.3/1.3 MB 2.8 MB/s eta 0:00:00
Requirement already satisfied: transformers<5.0.0,>=4.35.2 in /home/chop/.local/lib/python3.10/site-packages (from -r ./requirements/requirements-convert.txt (line 3)) (4.39.0)
Collecting gguf>=0.1.0 (from -r ./requirements/requirements-convert.txt (line 4))
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/97/a4/83969343abb00fe787de5965c5c1f617aa51b2e2c563d4391c402aba548f/gguf-0.6.0-py3-none-any.whl (23 kB)
Collecting protobuf<5.0.0,>=4.21.0 (from -r ./requirements/requirements-convert.txt (line 5))
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/15/db/7f731524fe0e56c6b2eb57d05b55d3badd80ef7d1f1ed59db191b2fdd8ab/protobuf-4.25.3-cp37-abi3-manylinux2014_x86_64.whl (294 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 294.6/294.6 kB 2.0 MB/s eta 0:00:00
Collecting torch~=2.1.1 (from -r ./requirements/requirements-convert-hf-to-gguf.txt (line 2))
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/03/f1/13137340776dd5d5bcfd2574c9c6dfcc7618285035cd77240496e5c1a79b/torch-2.1.2-cp310-cp310-manylinux1_x86_64.whl (670.2 MB)
Collecting einops~=0.7.0 (from -r ./requirements/requirements-convert-hf-to-gguf.txt (line 3))
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/29/0b/2d1c0ebfd092e25935b86509a9a817159212d82aa43d7fb07eca4eeff2c2/einops-0.7.0-py3-none-any.whl (44 kB)
Requirement already satisfied: filelock in /home/chop/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.35.2->-r ./requirements/requirements-convert.txt (line 3)) (3.13.4)
Requirement already satisfied: huggingface-hub<1.0,>=0.19.3 in /home/chop/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.35.2->-r ./requirements/requirements-convert.txt (line 3)) (0.22.2)
Requirement already satisfied: packaging>=20.0 in /home/chop/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.35.2->-r ./requirements/requirements-convert.txt (line 3)) (24.0)
Collecting pyyaml>=5.1 (from transformers<5.0.0,>=4.35.2->-r ./requirements/requirements-convert.txt (line 3))
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/29/61/bf33c6c85c55bc45a29eee3195848ff2d518d84735eb0e2d8cb42e0d285e/PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (705 kB)
Requirement already satisfied: regex!=2019.12.17 in /home/chop/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.35.2->-r ./requirements/requirements-convert.txt (line 3)) (2024.4.16)
Requirement already satisfied: requests in /home/chop/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.35.2->-r ./requirements/requirements-convert.txt (line 3)) (2.31.0)
Requirement already satisfied: tokenizers<0.19,>=0.14 in /home/chop/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.35.2->-r ./requirements/requirements-convert.txt (line 3)) (0.15.2)
Requirement already satisfied: safetensors>=0.4.1 in /home/chop/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.35.2->-r ./requirements/requirements-convert.txt (line 3)) (0.4.3)
Requirement already satisfied: tqdm>=4.27 in /home/chop/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.35.2->-r ./requirements/requirements-convert.txt (line 3)) (4.66.2)
Requirement already satisfied: typing-extensions in /home/chop/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (4.11.0)
Requirement already satisfied: sympy in /home/chop/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (1.12)
Requirement already satisfied: networkx in /home/chop/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (3.3)
Requirement already satisfied: jinja2 in /home/chop/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (3.1.3)
Requirement already satisfied: fsspec in /home/chop/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (2024.3.1)
Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /home/chop/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.105)
Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /home/chop/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.105)
Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /home/chop/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.105)
Requirement already satisfied: nvidia-cudnn-cu12==8.9.2.26 in /home/chop/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (8.9.2.26)
Requirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in /home/chop/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.3.1)
Requirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in /home/chop/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (11.0.2.54)
Requirement already satisfied: nvidia-curand-cu12==10.3.2.106 in /home/chop/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (10.3.2.106)
Requirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in /home/chop/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (11.4.5.107)
Requirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in /home/chop/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.0.106)
Collecting nvidia-nccl-cu12==2.18.1 (from torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2))
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/a4/05/23f8f38eec3d28e4915725b233c24d8f1a33cb6540a882f7b54be1befa02/nvidia_nccl_cu12-2.18.1-py3-none-manylinux1_x86_64.whl (209.8 MB)
Requirement already satisfied: nvidia-nvtx-cu12==12.1.105 in /home/chop/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.105)
Collecting triton==2.1.0 (from torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2))
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/4d/22/91a8af421c8a8902dde76e6ef3db01b258af16c53d81e8c0d0dc13900a9e/triton-2.1.0-0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (89.2 MB)
Requirement already satisfied: nvidia-nvjitlink-cu12 in /home/chop/.local/lib/python3.10/site-packages (from nvidia-cusolver-cu12==11.4.5.107->torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.4.127)
Requirement already satisfied: MarkupSafe>=2.0 in /home/chop/.local/lib/python3.10/site-packages (from jinja2->torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (2.1.5)
Requirement already satisfied: charset-normalizer<4,>=2 in /home/chop/.local/lib/python3.10/site-packages (from requests->transformers<5.0.0,>=4.35.2->-r ./requirements/requirements-convert.txt (line 3)) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in /home/chop/.local/lib/python3.10/site-packages (from requests->transformers<5.0.0,>=4.35.2->-r ./requirements/requirements-convert.txt (line 3)) (3.7)
Requirement already satisfied: urllib3<3,>=1.21.1 in /home/chop/.local/lib/python3.10/site-packages (from requests->transformers<5.0.0,>=4.35.2->-r ./requirements/requirements-convert.txt (line 3)) (2.2.1)
Requirement already satisfied: certifi>=2017.4.17 in /home/chop/.local/lib/python3.10/site-packages (from requests->transformers<5.0.0,>=4.35.2->-r ./requirements/requirements-convert.txt (line 3)) (2024.2.2)
Requirement already satisfied: mpmath>=0.19 in /home/chop/.local/lib/python3.10/site-packages (from sympy->torch~=2.1.1->-r ./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (1.3.0)
Installing collected packages: sentencepiece, triton, pyyaml, protobuf, nvidia-nccl-cu12, numpy, einops, gguf, torch
  Attempting uninstall: sentencepiece
    Found existing installation: sentencepiece 0.2.0
    Uninstalling sentencepiece-0.2.0:
      Successfully uninstalled sentencepiece-0.2.0
  Attempting uninstall: protobuf
    Found existing installation: protobuf 5.26.1
    Uninstalling protobuf-5.26.1:
      Successfully uninstalled protobuf-5.26.1
  Attempting uninstall: numpy
    Found existing installation: numpy 1.26.4
    Uninstalling numpy-1.26.4:
      Successfully uninstalled numpy-1.26.4
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
matplotlib 3.8.4 requires pyparsing>=2.3.1, which is not installed.
tensorboard 2.16.2 requires six>1.9, which is not installed.
Successfully installed einops-0.7.0 gguf-0.6.0 numpy-1.24.4 nvidia-nccl-cu12-2.18.1 protobuf-4.25.3 pyyaml-6.0.1 sentencepiece-0.1.99 torch-2.1.2 triton-2.1.0
(Llama-7b-Chinese) :~/Chinese-LLaMA-Alpaca/llama.cpp$ pip install pyparsing
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting pyparsing
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/9d/ea/6d76df31432a0e6fdf81681a895f009a4bb47b3c39036db3e1b528191d52/pyparsing-3.1.2-py3-none-any.whl (103 kB)
Installing collected packages: pyparsing
Successfully installed pyparsing-3.1.2
(Llama-7b-Chinese) :~/Chinese-LLaMA-Alpaca/llama.cpp$ pip install matplotlib
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: matplotlib in /home/chop/.local/lib/python3.10/site-packages (3.8.4)
Requirement already satisfied: contourpy>=1.0.1 in /home/chop/.local/lib/python3.10/site-packages (from matplotlib) (1.2.1)
Requirement already satisfied: cycler>=0.10 in /home/chop/.local/lib/python3.10/site-packages (from matplotlib) (0.12.1)
Requirement already satisfied: fonttools>=4.22.0 in /home/chop/.local/lib/python3.10/site-packages (from matplotlib) (4.51.0)
Requirement already satisfied: kiwisolver>=1.3.1 in /home/chop/.local/lib/python3.10/site-packages (from matplotlib) (1.4.5)
Requirement already satisfied: numpy>=1.21 in /home/chop/miniconda3/envs/Llama-7b-Chinese/lib/python3.10/site-packages (from matplotlib) (1.24.4)
Requirement already satisfied: packaging>=20.0 in /home/chop/.local/lib/python3.10/site-packages (from matplotlib) (24.0)
Requirement already satisfied: pillow>=8 in /home/chop/.local/lib/python3.10/site-packages (from matplotlib) (10.3.0)
Requirement already satisfied: pyparsing>=2.3.1 in /home/chop/miniconda3/envs/Llama-7b-Chinese/lib/python3.10/site-packages (from matplotlib) (3.1.2)
Requirement already satisfied: python-dateutil>=2.7 in /home/chop/.local/lib/python3.10/site-packages (from matplotlib) (2.9.0.post0)
Collecting six>=1.5 (from python-dateutil>=2.7->matplotlib)
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/d9/5a/e7c31adbe875f2abbb91bd84cf2dc52d792b5a01506781dbcf25c91daf11/six-1.16.0-py2.py3-none-any.whl (11 kB)
Installing collected packages: six
Successfully installed six-1.16.0
(Llama-7b-Chinese) :~/Chinese-LLaMA-Alpaca/llama.cpp$ pip install six
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: six in /home/chop/miniconda3/envs/Llama-7b-Chinese/lib/python3.10/site-packages (1.16.0)
(Llama-7b-Chinese) :~/Chinese-LLaMA-Alpaca/llama.cpp$ pip install tensorboard
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: tensorboard in /home/chop/.local/lib/python3.10/site-packages (2.16.2)
Requirement already satisfied: absl-py>=0.4 in /home/chop/.local/lib/python3.10/site-packages (from tensorboard) (2.1.0)
Requirement already satisfied: grpcio>=1.48.2 in /home/chop/.local/lib/python3.10/site-packages (from tensorboard) (1.62.2)
Requirement already satisfied: markdown>=2.6.8 in /home/chop/.local/lib/python3.10/site-packages (from tensorboard) (3.6)
Requirement already satisfied: numpy>=1.12.0 in /home/chop/miniconda3/envs/Llama-7b-Chinese/lib/python3.10/site-packages (from tensorboard) (1.24.4)
Requirement already satisfied: protobuf!=4.24.0,>=3.19.6 in /home/chop/miniconda3/envs/Llama-7b-Chinese/lib/python3.10/site-packages (from tensorboard) (4.25.3)
Requirement already satisfied: setuptools>=41.0.0 in /home/chop/miniconda3/envs/Llama-7b-Chinese/lib/python3.10/site-packages (from tensorboard) (68.2.2)
Requirement already satisfied: six>1.9 in /home/chop/miniconda3/envs/Llama-7b-Chinese/lib/python3.10/site-packages (from tensorboard) (1.16.0)
Requirement already satisfied: tensorboard-data-server<0.8.0,>=0.7.0 in /home/chop/.local/lib/python3.10/site-packages (from tensorboard) (0.7.2)
Requirement already satisfied: werkzeug>=1.0.1 in /home/chop/.local/lib/python3.10/site-packages (from tensorboard) (3.0.2)
Requirement already satisfied: MarkupSafe>=2.1.1 in /home/chop/.local/lib/python3.10/site-packages (from werkzeug>=1.0.1->tensorboard) (2.1.5)
(Llama-7b-Chinese) :~/Chinese-LLaMA-Alpaca/llama.cpp$ pip list
Package                   Version
------------------------- -----------
absl-py                   2.1.0
accelerate                0.27.2
aiofiles                  23.2.1
aiohttp                   3.9.5
aiosignal                 1.3.1
altair                    5.3.0
annotated-types           0.6.0
anyio                     4.3.0
async-timeout             4.0.3
attrs                     23.2.0
bitsandbytes              0.42.0
certifi                   2024.2.2
charset-normalizer        3.3.2
click                     8.1.7
contourpy                 1.2.1
cycler                    0.12.1
datasets                  2.19.0
dill                      0.3.8
einops                    0.7.0
evaluate                  0.4.1
exceptiongroup            1.2.1
fastapi                   0.110.2
ffmpy                     0.3.2
filelock                  3.13.4
fonttools                 4.51.0
frozenlist                1.4.1
fsspec                    2024.3.1
gekko                     1.0.6
gguf                      0.6.0
gradio                    4.27.0
gradio_client             0.15.1
grpcio                    1.62.2
h11                       0.14.0
httpcore                  1.0.5
httpx                     0.27.0
huggingface-hub           0.22.2
idna                      3.7
importlib_resources       6.4.0
iniconfig                 2.0.0
Jinja2                    3.1.3
joblib                    1.4.0
jsonschema                4.21.1
jsonschema-specifications 2023.12.1
kiwisolver                1.4.5
Markdown                  3.6
markdown-it-py            3.0.0
MarkupSafe                2.1.5
matplotlib                3.8.4
mdurl                     0.1.2
mpmath                    1.3.0
multidict                 6.0.5
multiprocess              0.70.16
networkx                  3.3
numpy                     1.24.4
nvidia-cublas-cu12        12.1.3.1
nvidia-cuda-cupti-cu12    12.1.105
nvidia-cuda-nvrtc-cu12    12.1.105
nvidia-cuda-runtime-cu12  12.1.105
nvidia-cudnn-cu12         8.9.2.26
nvidia-cufft-cu12         11.0.2.54
nvidia-curand-cu12        10.3.2.106
nvidia-cusolver-cu12      11.4.5.107
nvidia-cusparse-cu12      12.1.0.106
nvidia-nccl-cu12          2.18.1
nvidia-nvjitlink-cu12     12.4.127
nvidia-nvtx-cu12          12.1.105
orjson                    3.10.1
packaging                 24.0
pandas                    2.2.2
peft                      0.8.2
pillow                    10.3.0
pip                       23.3.1
pluggy                    1.5.0
protobuf                  4.25.3
psutil                    5.9.8
pyarrow                   16.0.0
pyarrow-hotfix            0.6
pydantic                  2.7.0
pydantic_core             2.18.1
pydub                     0.25.1
Pygments                  2.17.2
pyparsing                 3.1.2
pytest                    8.1.1
python-dateutil           2.9.0.post0
python-multipart          0.0.9
pytz                      2024.1
PyYAML                    6.0.1
referencing               0.34.0
regex                     2024.4.16
requests                  2.31.0
responses                 0.18.0
rich                      13.7.1
rpds-py                   0.18.0
ruff                      0.4.1
safetensors               0.4.3
scikit-learn              1.4.2
scipy                     1.13.0
semantic-version          2.10.0
sentencepiece             0.1.99
setuptools                68.2.2
shellingham               1.5.4
six                       1.16.0
sniffio                   1.3.1
starlette                 0.37.2
sympy                     1.12
tensorboard               2.16.2
tensorboard-data-server   0.7.2
threadpoolctl             3.4.0
tokenizers                0.15.2
tomli                     2.0.1
tomlkit                   0.12.0
toolz                     0.12.1
torch                     2.1.2
torchdata                 0.7.1
tqdm                      4.66.2
transformers              4.39.0
triton                    2.1.0
typer                     0.12.3
typing_extensions         4.11.0
tzdata                    2024.1
urllib3                   2.2.1
uvicorn                   0.29.0
websockets                11.0.3
Werkzeug                  3.0.2
wheel                     0.41.2
xxhash                    3.4.1
yarl                      1.9.4
(Llama-7b-Chinese) :~/Chinese-LLaMA-Alpaca/llama.cpp$
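
Since the requirements pulled in torch 2.1.2 together with the CUDA 12.1 runtime wheels, a quick sanity check that PyTorch can see the GPU under WSL2 can save debugging time later. This step is optional; the GGUF conversion itself runs on the CPU:

(Llama-7b-Chinese) :~/Chinese-LLaMA-Alpaca/llama.cpp$ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"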


Building llama.cpp

Build with GPU (cuBLAS) support:


(Llama-7b-Chinese) :~/newmodels/llama.cpp$ make LLAMA_CUBLAS=1 LLAMA_CUDA_NVCC=/usr/local/cuda/bin/nvcc


I llama.cpp build info:
I UNAME_S:   Linux
I UNAME_P:   x86_64
I UNAME_M:   x86_64
I CFLAGS:    -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c11   -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -pthread -march=native -mtune=native -Wdouble-promotion
I CXXFLAGS:  -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi
I NVCCFLAGS: -use_fast_math --forward-unknown-to-host-compiler -arch=native -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DK_QUANTS_PER_ITERATION=2 -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128
I LDFLAGS:   -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
I CC:        cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
I CXX:       g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0

cc  -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c11   -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -pthread -march=native -mtune=native -Wdouble-promotion    -c ggml.c -o ggml.o
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi -c llama.cpp -o llama.o
llama.cpp:7101:26: warning: no previous declaration for ‘std::vector<std::__cxx11::basic_string<char> > splitString(const string&, const std::vector<std::__cxx11::basic_string<char> >&)[-Wmissing-declarations]
 7101 | std::vector<std::string> splitString(const std::string& input, const std::vector<std::string>& delimiters) {
      |                          ^~~~~~~~~~~
llama.cpp: In function ‘std::vector<std::__cxx11::basic_string<char> > splitString(const string&, const std::vector<std::__cxx11::basic_string<char> >&)’:
llama.cpp:7104:12: warning: unused variable ‘foundPos’ [-Wunused-variable]
 7104 |     size_t foundPos = 0;
      |            ^~~~~~~~
llama.cpp: At global scope:
llama.cpp:7134:6: warning: no previous declaration for ‘bool is_spectial_token(const string&, const std::vector<std::__cxx11::basic_string<char> >&)[-Wmissing-declarations]
 7134 | bool is_spectial_token(const std::string & text, const std::vector<std::string>& delimiters) {
      |      ^~~~~~~~~~~~~~~~~
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi -c common/common.cpp -o common.o
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi -c common/sampling.cpp -o sampling.o
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi -c common/grammar-parser.cpp -o grammar-parser.o
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi -c common/build-info.cpp -o build-info.o
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi -c common/console.cpp -o console.o
/usr/local/cuda/bin/nvcc -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -use_fast_math --forward-unknown-to-host-compiler -arch=native -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DK_QUANTS_PER_ITERATION=2 -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128  -Wno-pedantic -Xcompiler "-Wno-array-bounds -Wno-format-truncation -Wextra-semi" -c ggml-cuda.cu -o ggml-cuda.o
cc  -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c11   -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -pthread -march=native -mtune=native -Wdouble-promotion    -c ggml-alloc.c -o ggml-alloc.o
cc  -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c11   -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -pthread -march=native -mtune=native -Wdouble-promotion    -c ggml-backend.c -o ggml-backend.o
cc -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c11   -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -pthread -march=native -mtune=native -Wdouble-promotion     -c ggml-quants.c -o ggml-quants.o
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/main/main.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o console.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o main -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib

====  Run ./main -h for help.  ====

g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/quantize/quantize.cpp build-info.o ggml.o llama.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o quantize -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/quantize-stats/quantize-stats.cpp build-info.o ggml.o llama.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o quantize-stats -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/perplexity/perplexity.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o perplexity -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/imatrix/imatrix.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o imatrix -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/embedding/embedding.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o embedding -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi pocs/vdot/vdot.cpp ggml.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o vdot -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi pocs/vdot/q8dot.cpp ggml.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o q8dot -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi -c common/train.cpp -o train.o
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/train-text-from-scratch/train-text-from-scratch.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o train.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o train-text-from-scratch -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/convert-llama2c-to-ggml/convert-llama2c-to-ggml.cpp ggml.o llama.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o convert-llama2c-to-ggml -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/simple/simple.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o simple -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/batched/batched.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o batched -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/batched-bench/batched-bench.cpp build-info.o ggml.o llama.o common.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o batched-bench -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/save-load-state/save-load-state.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o save-load-state -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi -Iexamples/server examples/server/server.cpp examples/llava/clip.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o server -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib   -Wno-cast-qual
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/gguf/gguf.cpp ggml.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o gguf -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/llama-bench/llama-bench.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o llama-bench -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi -static -fPIC -c examples/llava/llava.cpp -o libllava.a -Wno-cast-qual
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/llava/llava-cli.cpp examples/llava/clip.cpp examples/llava/llava.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o llava-cli -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib  -Wno-cast-qual
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/baby-llama/baby-llama.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o train.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o baby-llama -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/beam-search/beam-search.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o beam-search -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/speculative/speculative.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o speculative -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/infill/infill.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o console.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o infill -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/tokenize/tokenize.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o tokenize -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/benchmark/benchmark-matmult.cpp build-info.o ggml.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o benchmark-matmult -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/parallel/parallel.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o parallel -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/finetune/finetune.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o train.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o finetune -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/export-lora/export-lora.cpp ggml.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o export-lora -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/lookahead/lookahead.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o lookahead -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/lookup/lookup.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o lookup -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
g++ -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -march=native -mtune=native -Wno-array-bounds -Wno-format-truncation -Wextra-semi examples/passkey/passkey.cpp ggml.o llama.o common.o sampling.o grammar-parser.o build-info.o ggml-cuda.o ggml-alloc.o ggml-backend.o ggml-quants.o -o passkey -lcuda -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/targets/x86_64-linux/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/wsl/lib
cc -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/targets/x86_64-linux/include -I/usr/local/cuda/targets/aarch64-linux/include  -std=c11   -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -pthread -march=native -mtune=native -Wdouble-promotion  -c tests/test-c.c -o tests/test-c.o
(Llama-7b-Chinese) :~/newmodels/llama.cpp$
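If make finishes without errors, the example binaries (main, quantize, server, and the rest) land in the repository root. A quick sanity check that the build actually produced them (a minimal sketch; list whichever targets you care about):

(Llama-7b-Chinese) :~/newmodels/llama.cpp$ ls -l main quantize server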

Download the model: Llama-7b-Chinese

You can download the Llama-7b-Chinese model from the source below.

https://github.com/LlamaFamily/Llama-Chinese?tab=readme-ov-file
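That repository's README links out to the actual weight hosts (Hugging Face, ModelScope, and net-disk mirrors). One way to fetch the weights is a Git LFS clone; the repo id below is an assumption for illustration, so substitute whichever mirror the README points you to:

(Llama-7b-Chinese) :~$ git lfs install
(Llama-7b-Chinese) :~$ git clone https://huggingface.co/FlagAlpha/Llama2-Chinese-7b-Chat Llama-7b-Chinese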

(Llama-7b-Chinese) :~/Llama-7b-Chinese$ ls
Llama2中文社区.jpeg  checklist.chk  consolidated.00.pth     params.json              tokenizer.model          tokenizer_config.json
Llama2中文社区.txt   config.json    generation_config.json  special_tokens_map.json  tokenizer_checklist.chk

Copy the model

Copy the model into llama.cpp's models directory. Note that Llama-7b-Chinese is a directory, so cp needs -r:
(Llama-7b-Chinese) :~$ cp -r Llama-7b-Chinese ./newmodels/llama.cpp/models/
(Llama-7b-Chinese) :~/newmodels/llama.cpp/models$ ls
Llama-7b-Chinese          ggml-vocab-falcon.gguf    ggml-vocab-llama.gguf   ggml-vocab-stablelm-3b-4e1t.gguf
ggml-vocab-aquila.gguf    ggml-vocab-gpt-neox.gguf  ggml-vocab-mpt.gguf     ggml-vocab-starcoder.gguf
ggml-vocab-baichuan.gguf  ggml-vocab-gpt2.gguf      ggml-vocab-refact.gguf
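A 7B checkpoint in bf16 is roughly 13 GB, so the copy doubles disk usage. If you would rather not duplicate it, a symlink into models/ works just as well for the conversion step (a sketch, assuming the paths used above):

(Llama-7b-Chinese) :~$ ln -s ~/Llama-7b-Chinese ~/newmodels/llama.cpp/models/Llama-7b-Chinese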

Model conversion

Manually convert the model files to GGUF format with convert.py.
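By default convert.py infers the output type from the source tensors (bf16 here, written as f16) and places ggml-model-f16.gguf next to the weights. Both choices can be overridden; the flags below are from this era of llama.cpp, but verify them against python convert.py --help:

(Llama-7b-Chinese) :~/newmodels/llama.cpp$ python convert.py models/Llama-7b-Chinese/ --outtype f16 --outfile models/Llama-7b-Chinese/ggml-model-f16.gguf

The run below sticks to the defaults: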
(Llama-7b-Chinese) :~/newmodels/llama.cpp$ python convert.py models/Llama-7b-Chinese/
Loading model file models/Llama-7b-Chinese/consolidated.00.pth
params = Params(n_vocab=32000, n_embd=4096, n_layer=32, n_ctx=2048, n_ff=11008, n_head=32, n_head_kv=32, n_experts=None, n_experts_used=None, f_norm_eps=1e-05, rope_scaling_type=None, f_rope_freq_base=None, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=None, path_model=PosixPath('models/Llama-7b-Chinese'))
Found vocab files: {'tokenizer.model': PosixPath('models/Llama-7b-Chinese/tokenizer.model'), 'vocab.json': None, 'tokenizer.json': None}
Loading vocab file 'models/Llama-7b-Chinese/tokenizer.model', type 'spm'
Vocab info: <SentencePieceVocab with 32000 base tokens and 0 added tokens>
Special vocab info: <SpecialVocab with 0 merges, special tokens {'bos': 1, 'eos': 2, 'pad': 0}, add special tokens {'bos': True, 'eos': False}>
tok_embeddings.weight                            -> token_embd.weight                        | BF16   | [32000, 4096]
norm.weight                                      -> output_norm.weight                       | BF16   | [4096]
output.weight                                    -> output.weight                            | BF16   | [32000, 4096]
layers.0.attention.wq.weight                     -> blk.0.attn_q.weight                      | BF16   | [4096, 4096]
layers.0.attention.wk.weight                     -> blk.0.attn_k.weight                      | BF16   | [4096, 4096]
layers.0.attention.wv.weight                     -> blk.0.attn_v.weight                      | BF16   | [4096, 4096]
layers.0.attention.wo.weight                     -> blk.0.attn_output.weight                 | BF16   | [4096, 4096]
layers.0.feed_forward.w1.weight                  -> blk.0.ffn_gate.weight                    | BF16   | [11008, 4096]
layers.0.feed_forward.w2.weight                  -> blk.0.ffn_down.weight                    | BF16   | [4096, 11008]
layers.0.feed_forward.w3.weight                  -> blk.0.ffn_up.weight                      | BF16   | [11008, 4096]
layers.0.attention_norm.weight                   -> blk.0.attn_norm.weight                   | BF16   | [4096]
layers.0.ffn_norm.weight                         -> blk.0.ffn_norm.weight                    | BF16   | [4096]
layers.1.attention.wq.weight                     -> blk.1.attn_q.weight                      | BF16   | [4096, 4096]
layers.1.attention.wk.weight                     -> blk.1.attn_k.weight                      | BF16   | [4096, 4096]
layers.1.attention.wv.weight                     -> blk.1.attn_v.weight                      | BF16   | [4096, 4096]
layers.1.attention.wo.weight                     -> blk.1.attn_output.weight                 | BF16   | [4096, 4096]
layers.1.feed_forward.w1.weight                  -> blk.1.ffn_gate.weight                    | BF16   | [11008, 4096]
layers.1.feed_forward.w2.weight                  -> blk.1.ffn_down.weight                    | BF16   | [4096, 11008]
layers.1.feed_forward.w3.weight                  -> blk.1.ffn_up.weight                      | BF16   | [11008, 4096]
layers.1.attention_norm.weight                   -> blk.1.attn_norm.weight                   | BF16   | [4096]
layers.1.ffn_norm.weight                         -> blk.1.ffn_norm.weight                    | BF16   | [4096]
layers.2.attention.wq.weight                     -> blk.2.attn_q.weight                      | BF16   | [4096, 4096]
layers.2.attention.wk.weight                     -> blk.2.attn_k.weight                      | BF16   | [4096, 4096]
layers.2.attention.wv.weight                     -> blk.2.attn_v.weight                      | BF16   | [4096, 4096]
layers.2.attention.wo.weight                     -> blk.2.attn_output.weight                 | BF16   | [4096, 4096]
layers.2.feed_forward.w1.weight                  -> blk.2.ffn_gate.weight                    | BF16   | [11008, 4096]
layers.2.feed_forward.w2.weight                  -> blk.2.ffn_down.weight                    | BF16   | [4096, 11008]
layers.2.feed_forward.w3.weight                  -> blk.2.ffn_up.weight                      | BF16   | [11008, 4096]
layers.2.attention_norm.weight                   -> blk.2.attn_norm.weight                   | BF16   | [4096]
layers.2.ffn_norm.weight                         -> blk.2.ffn_norm.weight                    | BF16   | [4096]
layers.3.attention.wq.weight                     -> blk.3.attn_q.weight                      | BF16   | [4096, 4096]
layers.3.attention.wk.weight                     -> blk.3.attn_k.weight                      | BF16   | [4096, 4096]
layers.3.attention.wv.weight                     -> blk.3.attn_v.weight                      | BF16   | [4096, 4096]
layers.3.attention.wo.weight                     -> blk.3.attn_output.weight                 | BF16   | [4096, 4096]
layers.3.feed_forward.w1.weight                  -> blk.3.ffn_gate.weight                    | BF16   | [11008, 4096]
layers.3.feed_forward.w2.weight                  -> blk.3.ffn_down.weight                    | BF16   | [4096, 11008]
layers.3.feed_forward.w3.weight                  -> blk.3.ffn_up.weight                      | BF16   | [11008, 4096]
layers.3.attention_norm.weight                   -> blk.3.attn_norm.weight                   | BF16   | [4096]
layers.3.ffn_norm.weight                         -> blk.3.ffn_norm.weight                    | BF16   | [4096]
layers.4.attention.wq.weight                     -> blk.4.attn_q.weight                      | BF16   | [4096, 4096]
layers.4.attention.wk.weight                     -> blk.4.attn_k.weight                      | BF16   | [4096, 4096]
layers.4.attention.wv.weight                     -> blk.4.attn_v.weight                      | BF16   | [4096, 4096]
layers.4.attention.wo.weight                     -> blk.4.attn_output.weight                 | BF16   | [4096, 4096]
layers.4.feed_forward.w1.weight                  -> blk.4.ffn_gate.weight                    | BF16   | [11008, 4096]
layers.4.feed_forward.w2.weight                  -> blk.4.ffn_down.weight                    | BF16   | [4096, 11008]
layers.4.feed_forward.w3.weight                  -> blk.4.ffn_up.weight                      | BF16   | [11008, 4096]
layers.4.attention_norm.weight                   -> blk.4.attn_norm.weight                   | BF16   | [4096]
layers.4.ffn_norm.weight                         -> blk.4.ffn_norm.weight                    | BF16   | [4096]
layers.5.attention.wq.weight                     -> blk.5.attn_q.weight                      | BF16   | [4096, 4096]
layers.5.attention.wk.weight                     -> blk.5.attn_k.weight                      | BF16   | [4096, 4096]
layers.5.attention.wv.weight                     -> blk.5.attn_v.weight                      | BF16   | [4096, 4096]
layers.5.attention.wo.weight                     -> blk.5.attn_output.weight                 | BF16   | [4096, 4096]
layers.5.feed_forward.w1.weight                  -> blk.5.ffn_gate.weight                    | BF16   | [11008, 4096]
layers.5.feed_forward.w2.weight                  -> blk.5.ffn_down.weight                    | BF16   | [4096, 11008]
layers.5.feed_forward.w3.weight                  -> blk.5.ffn_up.weight                      | BF16   | [11008, 4096]
layers.5.attention_norm.weight                   -> blk.5.attn_norm.weight                   | BF16   | [4096]
layers.5.ffn_norm.weight                         -> blk.5.ffn_norm.weight                    | BF16   | [4096]
layers.6.attention.wq.weight                     -> blk.6.attn_q.weight                      | BF16   | [4096, 4096]
layers.6.attention.wk.weight                     -> blk.6.attn_k.weight                      | BF16   | [4096, 4096]
layers.6.attention.wv.weight                     -> blk.6.attn_v.weight                      | BF16   | [4096, 4096]
layers.6.attention.wo.weight                     -> blk.6.attn_output.weight                 | BF16   | [4096, 4096]
layers.6.feed_forward.w1.weight                  -> blk.6.ffn_gate.weight                    | BF16   | [11008, 4096]
layers.6.feed_forward.w2.weight                  -> blk.6.ffn_down.weight                    | BF16   | [4096, 11008]
layers.6.feed_forward.w3.weight                  -> blk.6.ffn_up.weight                      | BF16   | [11008, 4096]
layers.6.attention_norm.weight                   -> blk.6.attn_norm.weight                   | BF16   | [4096]
layers.6.ffn_norm.weight                         -> blk.6.ffn_norm.weight                    | BF16   | [4096]
layers.7.attention.wq.weight                     -> blk.7.attn_q.weight                      | BF16   | [4096, 4096]
layers.7.attention.wk.weight                     -> blk.7.attn_k.weight                      | BF16   | [4096, 4096]
layers.7.attention.wv.weight                     -> blk.7.attn_v.weight                      | BF16   | [4096, 4096]
layers.7.attention.wo.weight                     -> blk.7.attn_output.weight                 | BF16   | [4096, 4096]
layers.7.feed_forward.w1.weight                  -> blk.7.ffn_gate.weight                    | BF16   | [11008, 4096]
layers.7.feed_forward.w2.weight                  -> blk.7.ffn_down.weight                    | BF16   | [4096, 11008]
layers.7.feed_forward.w3.weight                  -> blk.7.ffn_up.weight                      | BF16   | [11008, 4096]
layers.7.attention_norm.weight                   -> blk.7.attn_norm.weight                   | BF16   | [4096]
layers.7.ffn_norm.weight                         -> blk.7.ffn_norm.weight                    | BF16   | [4096]
layers.8.attention.wq.weight                     -> blk.8.attn_q.weight                      | BF16   | [4096, 4096]
layers.8.attention.wk.weight                     -> blk.8.attn_k.weight                      | BF16   | [4096, 4096]
layers.8.attention.wv.weight                     -> blk.8.attn_v.weight                      | BF16   | [4096, 4096]
layers.8.attention.wo.weight                     -> blk.8.attn_output.weight                 | BF16   | [4096, 4096]
layers.8.feed_forward.w1.weight                  -> blk.8.ffn_gate.weight                    | BF16   | [11008, 4096]
layers.8.feed_forward.w2.weight                  -> blk.8.ffn_down.weight                    | BF16   | [4096, 11008]
layers.8.feed_forward.w3.weight                  -> blk.8.ffn_up.weight                      | BF16   | [11008, 4096]
layers.8.attention_norm.weight                   -> blk.8.attn_norm.weight                   | BF16   | [4096]
layers.8.ffn_norm.weight                         -> blk.8.ffn_norm.weight                    | BF16   | [4096]
layers.9.attention.wq.weight                     -> blk.9.attn_q.weight                      | BF16   | [4096, 4096]
layers.9.attention.wk.weight                     -> blk.9.attn_k.weight                      | BF16   | [4096, 4096]
layers.9.attention.wv.weight                     -> blk.9.attn_v.weight                      | BF16   | [4096, 4096]
layers.9.attention.wo.weight                     -> blk.9.attn_output.weight                 | BF16   | [4096, 4096]
layers.9.feed_forward.w1.weight                  -> blk.9.ffn_gate.weight                    | BF16   | [11008, 4096]
layers.9.feed_forward.w2.weight                  -> blk.9.ffn_down.weight                    | BF16   | [4096, 11008]
layers.9.feed_forward.w3.weight                  -> blk.9.ffn_up.weight                      | BF16   | [11008, 4096]
layers.9.attention_norm.weight                   -> blk.9.attn_norm.weight                   | BF16   | [4096]
layers.9.ffn_norm.weight                         -> blk.9.ffn_norm.weight                    | BF16   | [4096]
layers.10.attention.wq.weight                    -> blk.10.attn_q.weight                     | BF16   | [4096, 4096]
layers.10.attention.wk.weight                    -> blk.10.attn_k.weight                     | BF16   | [4096, 4096]
layers.10.attention.wv.weight                    -> blk.10.attn_v.weight                     | BF16   | [4096, 4096]
layers.10.attention.wo.weight                    -> blk.10.attn_output.weight                | BF16   | [4096, 4096]
layers.10.feed_forward.w1.weight                 -> blk.10.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.10.feed_forward.w2.weight                 -> blk.10.ffn_down.weight                   | BF16   | [4096, 11008]
layers.10.feed_forward.w3.weight                 -> blk.10.ffn_up.weight                     | BF16   | [11008, 4096]
layers.10.attention_norm.weight                  -> blk.10.attn_norm.weight                  | BF16   | [4096]
layers.10.ffn_norm.weight                        -> blk.10.ffn_norm.weight                   | BF16   | [4096]
layers.11.attention.wq.weight                    -> blk.11.attn_q.weight                     | BF16   | [4096, 4096]
layers.11.attention.wk.weight                    -> blk.11.attn_k.weight                     | BF16   | [4096, 4096]
layers.11.attention.wv.weight                    -> blk.11.attn_v.weight                     | BF16   | [4096, 4096]
layers.11.attention.wo.weight                    -> blk.11.attn_output.weight                | BF16   | [4096, 4096]
layers.11.feed_forward.w1.weight                 -> blk.11.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.11.feed_forward.w2.weight                 -> blk.11.ffn_down.weight                   | BF16   | [4096, 11008]
layers.11.feed_forward.w3.weight                 -> blk.11.ffn_up.weight                     | BF16   | [11008, 4096]
layers.11.attention_norm.weight                  -> blk.11.attn_norm.weight                  | BF16   | [4096]
layers.11.ffn_norm.weight                        -> blk.11.ffn_norm.weight                   | BF16   | [4096]
layers.12.attention.wq.weight                    -> blk.12.attn_q.weight                     | BF16   | [4096, 4096]
layers.12.attention.wk.weight                    -> blk.12.attn_k.weight                     | BF16   | [4096, 4096]
layers.12.attention.wv.weight                    -> blk.12.attn_v.weight                     | BF16   | [4096, 4096]
layers.12.attention.wo.weight                    -> blk.12.attn_output.weight                | BF16   | [4096, 4096]
layers.12.feed_forward.w1.weight                 -> blk.12.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.12.feed_forward.w2.weight                 -> blk.12.ffn_down.weight                   | BF16   | [4096, 11008]
layers.12.feed_forward.w3.weight                 -> blk.12.ffn_up.weight                     | BF16   | [11008, 4096]
layers.12.attention_norm.weight                  -> blk.12.attn_norm.weight                  | BF16   | [4096]
layers.12.ffn_norm.weight                        -> blk.12.ffn_norm.weight                   | BF16   | [4096]
layers.13.attention.wq.weight                    -> blk.13.attn_q.weight                     | BF16   | [4096, 4096]
layers.13.attention.wk.weight                    -> blk.13.attn_k.weight                     | BF16   | [4096, 4096]
layers.13.attention.wv.weight                    -> blk.13.attn_v.weight                     | BF16   | [4096, 4096]
layers.13.attention.wo.weight                    -> blk.13.attn_output.weight                | BF16   | [4096, 4096]
layers.13.feed_forward.w1.weight                 -> blk.13.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.13.feed_forward.w2.weight                 -> blk.13.ffn_down.weight                   | BF16   | [4096, 11008]
layers.13.feed_forward.w3.weight                 -> blk.13.ffn_up.weight                     | BF16   | [11008, 4096]
layers.13.attention_norm.weight                  -> blk.13.attn_norm.weight                  | BF16   | [4096]
layers.13.ffn_norm.weight                        -> blk.13.ffn_norm.weight                   | BF16   | [4096]
layers.14.attention.wq.weight                    -> blk.14.attn_q.weight                     | BF16   | [4096, 4096]
layers.14.attention.wk.weight                    -> blk.14.attn_k.weight                     | BF16   | [4096, 4096]
layers.14.attention.wv.weight                    -> blk.14.attn_v.weight                     | BF16   | [4096, 4096]
layers.14.attention.wo.weight                    -> blk.14.attn_output.weight                | BF16   | [4096, 4096]
layers.14.feed_forward.w1.weight                 -> blk.14.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.14.feed_forward.w2.weight                 -> blk.14.ffn_down.weight                   | BF16   | [4096, 11008]
layers.14.feed_forward.w3.weight                 -> blk.14.ffn_up.weight                     | BF16   | [11008, 4096]
layers.14.attention_norm.weight                  -> blk.14.attn_norm.weight                  | BF16   | [4096]
layers.14.ffn_norm.weight                        -> blk.14.ffn_norm.weight                   | BF16   | [4096]
layers.15.attention.wq.weight                    -> blk.15.attn_q.weight                     | BF16   | [4096, 4096]
layers.15.attention.wk.weight                    -> blk.15.attn_k.weight                     | BF16   | [4096, 4096]
layers.15.attention.wv.weight                    -> blk.15.attn_v.weight                     | BF16   | [4096, 4096]
layers.15.attention.wo.weight                    -> blk.15.attn_output.weight                | BF16   | [4096, 4096]
layers.15.feed_forward.w1.weight                 -> blk.15.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.15.feed_forward.w2.weight                 -> blk.15.ffn_down.weight                   | BF16   | [4096, 11008]
layers.15.feed_forward.w3.weight                 -> blk.15.ffn_up.weight                     | BF16   | [11008, 4096]
layers.15.attention_norm.weight                  -> blk.15.attn_norm.weight                  | BF16   | [4096]
layers.15.ffn_norm.weight                        -> blk.15.ffn_norm.weight                   | BF16   | [4096]
layers.16.attention.wq.weight                    -> blk.16.attn_q.weight                     | BF16   | [4096, 4096]
layers.16.attention.wk.weight                    -> blk.16.attn_k.weight                     | BF16   | [4096, 4096]
layers.16.attention.wv.weight                    -> blk.16.attn_v.weight                     | BF16   | [4096, 4096]
layers.16.attention.wo.weight                    -> blk.16.attn_output.weight                | BF16   | [4096, 4096]
layers.16.feed_forward.w1.weight                 -> blk.16.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.16.feed_forward.w2.weight                 -> blk.16.ffn_down.weight                   | BF16   | [4096, 11008]
layers.16.feed_forward.w3.weight                 -> blk.16.ffn_up.weight                     | BF16   | [11008, 4096]
layers.16.attention_norm.weight                  -> blk.16.attn_norm.weight                  | BF16   | [4096]
layers.16.ffn_norm.weight                        -> blk.16.ffn_norm.weight                   | BF16   | [4096]
layers.17.attention.wq.weight                    -> blk.17.attn_q.weight                     | BF16   | [4096, 4096]
layers.17.attention.wk.weight                    -> blk.17.attn_k.weight                     | BF16   | [4096, 4096]
layers.17.attention.wv.weight                    -> blk.17.attn_v.weight                     | BF16   | [4096, 4096]
layers.17.attention.wo.weight                    -> blk.17.attn_output.weight                | BF16   | [4096, 4096]
layers.17.feed_forward.w1.weight                 -> blk.17.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.17.feed_forward.w2.weight                 -> blk.17.ffn_down.weight                   | BF16   | [4096, 11008]
layers.17.feed_forward.w3.weight                 -> blk.17.ffn_up.weight                     | BF16   | [11008, 4096]
layers.17.attention_norm.weight                  -> blk.17.attn_norm.weight                  | BF16   | [4096]
layers.17.ffn_norm.weight                        -> blk.17.ffn_norm.weight                   | BF16   | [4096]
layers.18.attention.wq.weight                    -> blk.18.attn_q.weight                     | BF16   | [4096, 4096]
layers.18.attention.wk.weight                    -> blk.18.attn_k.weight                     | BF16   | [4096, 4096]
layers.18.attention.wv.weight                    -> blk.18.attn_v.weight                     | BF16   | [4096, 4096]
layers.18.attention.wo.weight                    -> blk.18.attn_output.weight                | BF16   | [4096, 4096]
layers.18.feed_forward.w1.weight                 -> blk.18.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.18.feed_forward.w2.weight                 -> blk.18.ffn_down.weight                   | BF16   | [4096, 11008]
layers.18.feed_forward.w3.weight                 -> blk.18.ffn_up.weight                     | BF16   | [11008, 4096]
layers.18.attention_norm.weight                  -> blk.18.attn_norm.weight                  | BF16   | [4096]
layers.18.ffn_norm.weight                        -> blk.18.ffn_norm.weight                   | BF16   | [4096]
layers.19.attention.wq.weight                    -> blk.19.attn_q.weight                     | BF16   | [4096, 4096]
layers.19.attention.wk.weight                    -> blk.19.attn_k.weight                     | BF16   | [4096, 4096]
layers.19.attention.wv.weight                    -> blk.19.attn_v.weight                     | BF16   | [4096, 4096]
layers.19.attention.wo.weight                    -> blk.19.attn_output.weight                | BF16   | [4096, 4096]
layers.19.feed_forward.w1.weight                 -> blk.19.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.19.feed_forward.w2.weight                 -> blk.19.ffn_down.weight                   | BF16   | [4096, 11008]
layers.19.feed_forward.w3.weight                 -> blk.19.ffn_up.weight                     | BF16   | [11008, 4096]
layers.19.attention_norm.weight                  -> blk.19.attn_norm.weight                  | BF16   | [4096]
layers.19.ffn_norm.weight                        -> blk.19.ffn_norm.weight                   | BF16   | [4096]
layers.20.attention.wq.weight                    -> blk.20.attn_q.weight                     | BF16   | [4096, 4096]
layers.20.attention.wk.weight                    -> blk.20.attn_k.weight                     | BF16   | [4096, 4096]
layers.20.attention.wv.weight                    -> blk.20.attn_v.weight                     | BF16   | [4096, 4096]
layers.20.attention.wo.weight                    -> blk.20.attn_output.weight                | BF16   | [4096, 4096]
layers.20.feed_forward.w1.weight                 -> blk.20.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.20.feed_forward.w2.weight                 -> blk.20.ffn_down.weight                   | BF16   | [4096, 11008]
layers.20.feed_forward.w3.weight                 -> blk.20.ffn_up.weight                     | BF16   | [11008, 4096]
layers.20.attention_norm.weight                  -> blk.20.attn_norm.weight                  | BF16   | [4096]
layers.20.ffn_norm.weight                        -> blk.20.ffn_norm.weight                   | BF16   | [4096]
layers.21.attention.wq.weight                    -> blk.21.attn_q.weight                     | BF16   | [4096, 4096]
layers.21.attention.wk.weight                    -> blk.21.attn_k.weight                     | BF16   | [4096, 4096]
layers.21.attention.wv.weight                    -> blk.21.attn_v.weight                     | BF16   | [4096, 4096]
layers.21.attention.wo.weight                    -> blk.21.attn_output.weight                | BF16   | [4096, 4096]
layers.21.feed_forward.w1.weight                 -> blk.21.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.21.feed_forward.w2.weight                 -> blk.21.ffn_down.weight                   | BF16   | [4096, 11008]
layers.21.feed_forward.w3.weight                 -> blk.21.ffn_up.weight                     | BF16   | [11008, 4096]
layers.21.attention_norm.weight                  -> blk.21.attn_norm.weight                  | BF16   | [4096]
layers.21.ffn_norm.weight                        -> blk.21.ffn_norm.weight                   | BF16   | [4096]
layers.22.attention.wq.weight                    -> blk.22.attn_q.weight                     | BF16   | [4096, 4096]
layers.22.attention.wk.weight                    -> blk.22.attn_k.weight                     | BF16   | [4096, 4096]
layers.22.attention.wv.weight                    -> blk.22.attn_v.weight                     | BF16   | [4096, 4096]
layers.22.attention.wo.weight                    -> blk.22.attn_output.weight                | BF16   | [4096, 4096]
layers.22.feed_forward.w1.weight                 -> blk.22.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.22.feed_forward.w2.weight                 -> blk.22.ffn_down.weight                   | BF16   | [4096, 11008]
layers.22.feed_forward.w3.weight                 -> blk.22.ffn_up.weight                     | BF16   | [11008, 4096]
layers.22.attention_norm.weight                  -> blk.22.attn_norm.weight                  | BF16   | [4096]
layers.22.ffn_norm.weight                        -> blk.22.ffn_norm.weight                   | BF16   | [4096]
layers.23.attention.wq.weight                    -> blk.23.attn_q.weight                     | BF16   | [4096, 4096]
layers.23.attention.wk.weight                    -> blk.23.attn_k.weight                     | BF16   | [4096, 4096]
layers.23.attention.wv.weight                    -> blk.23.attn_v.weight                     | BF16   | [4096, 4096]
layers.23.attention.wo.weight                    -> blk.23.attn_output.weight                | BF16   | [4096, 4096]
layers.23.feed_forward.w1.weight                 -> blk.23.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.23.feed_forward.w2.weight                 -> blk.23.ffn_down.weight                   | BF16   | [4096, 11008]
layers.23.feed_forward.w3.weight                 -> blk.23.ffn_up.weight                     | BF16   | [11008, 4096]
layers.23.attention_norm.weight                  -> blk.23.attn_norm.weight                  | BF16   | [4096]
layers.23.ffn_norm.weight                        -> blk.23.ffn_norm.weight                   | BF16   | [4096]
layers.24.attention.wq.weight                    -> blk.24.attn_q.weight                     | BF16   | [4096, 4096]
layers.24.attention.wk.weight                    -> blk.24.attn_k.weight                     | BF16   | [4096, 4096]
layers.24.attention.wv.weight                    -> blk.24.attn_v.weight                     | BF16   | [4096, 4096]
layers.24.attention.wo.weight                    -> blk.24.attn_output.weight                | BF16   | [4096, 4096]
layers.24.feed_forward.w1.weight                 -> blk.24.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.24.feed_forward.w2.weight                 -> blk.24.ffn_down.weight                   | BF16   | [4096, 11008]
layers.24.feed_forward.w3.weight                 -> blk.24.ffn_up.weight                     | BF16   | [11008, 4096]
layers.24.attention_norm.weight                  -> blk.24.attn_norm.weight                  | BF16   | [4096]
layers.24.ffn_norm.weight                        -> blk.24.ffn_norm.weight                   | BF16   | [4096]
layers.25.attention.wq.weight                    -> blk.25.attn_q.weight                     | BF16   | [4096, 4096]
layers.25.attention.wk.weight                    -> blk.25.attn_k.weight                     | BF16   | [4096, 4096]
layers.25.attention.wv.weight                    -> blk.25.attn_v.weight                     | BF16   | [4096, 4096]
layers.25.attention.wo.weight                    -> blk.25.attn_output.weight                | BF16   | [4096, 4096]
layers.25.feed_forward.w1.weight                 -> blk.25.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.25.feed_forward.w2.weight                 -> blk.25.ffn_down.weight                   | BF16   | [4096, 11008]
layers.25.feed_forward.w3.weight                 -> blk.25.ffn_up.weight                     | BF16   | [11008, 4096]
layers.25.attention_norm.weight                  -> blk.25.attn_norm.weight                  | BF16   | [4096]
layers.25.ffn_norm.weight                        -> blk.25.ffn_norm.weight                   | BF16   | [4096]
layers.26.attention.wq.weight                    -> blk.26.attn_q.weight                     | BF16   | [4096, 4096]
layers.26.attention.wk.weight                    -> blk.26.attn_k.weight                     | BF16   | [4096, 4096]
layers.26.attention.wv.weight                    -> blk.26.attn_v.weight                     | BF16   | [4096, 4096]
layers.26.attention.wo.weight                    -> blk.26.attn_output.weight                | BF16   | [4096, 4096]
layers.26.feed_forward.w1.weight                 -> blk.26.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.26.feed_forward.w2.weight                 -> blk.26.ffn_down.weight                   | BF16   | [4096, 11008]
layers.26.feed_forward.w3.weight                 -> blk.26.ffn_up.weight                     | BF16   | [11008, 4096]
layers.26.attention_norm.weight                  -> blk.26.attn_norm.weight                  | BF16   | [4096]
layers.26.ffn_norm.weight                        -> blk.26.ffn_norm.weight                   | BF16   | [4096]
layers.27.attention.wq.weight                    -> blk.27.attn_q.weight                     | BF16   | [4096, 4096]
layers.27.attention.wk.weight                    -> blk.27.attn_k.weight                     | BF16   | [4096, 4096]
layers.27.attention.wv.weight                    -> blk.27.attn_v.weight                     | BF16   | [4096, 4096]
layers.27.attention.wo.weight                    -> blk.27.attn_output.weight                | BF16   | [4096, 4096]
layers.27.feed_forward.w1.weight                 -> blk.27.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.27.feed_forward.w2.weight                 -> blk.27.ffn_down.weight                   | BF16   | [4096, 11008]
layers.27.feed_forward.w3.weight                 -> blk.27.ffn_up.weight                     | BF16   | [11008, 4096]
layers.27.attention_norm.weight                  -> blk.27.attn_norm.weight                  | BF16   | [4096]
layers.27.ffn_norm.weight                        -> blk.27.ffn_norm.weight                   | BF16   | [4096]
layers.28.attention.wq.weight                    -> blk.28.attn_q.weight                     | BF16   | [4096, 4096]
layers.28.attention.wk.weight                    -> blk.28.attn_k.weight                     | BF16   | [4096, 4096]
layers.28.attention.wv.weight                    -> blk.28.attn_v.weight                     | BF16   | [4096, 4096]
layers.28.attention.wo.weight                    -> blk.28.attn_output.weight                | BF16   | [4096, 4096]
layers.28.feed_forward.w1.weight                 -> blk.28.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.28.feed_forward.w2.weight                 -> blk.28.ffn_down.weight                   | BF16   | [4096, 11008]
layers.28.feed_forward.w3.weight                 -> blk.28.ffn_up.weight                     | BF16   | [11008, 4096]
layers.28.attention_norm.weight                  -> blk.28.attn_norm.weight                  | BF16   | [4096]
layers.28.ffn_norm.weight                        -> blk.28.ffn_norm.weight                   | BF16   | [4096]
layers.29.attention.wq.weight                    -> blk.29.attn_q.weight                     | BF16   | [4096, 4096]
layers.29.attention.wk.weight                    -> blk.29.attn_k.weight                     | BF16   | [4096, 4096]
layers.29.attention.wv.weight                    -> blk.29.attn_v.weight                     | BF16   | [4096, 4096]
layers.29.attention.wo.weight                    -> blk.29.attn_output.weight                | BF16   | [4096, 4096]
layers.29.feed_forward.w1.weight                 -> blk.29.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.29.feed_forward.w2.weight                 -> blk.29.ffn_down.weight                   | BF16   | [4096, 11008]
layers.29.feed_forward.w3.weight                 -> blk.29.ffn_up.weight                     | BF16   | [11008, 4096]
layers.29.attention_norm.weight                  -> blk.29.attn_norm.weight                  | BF16   | [4096]
layers.29.ffn_norm.weight                        -> blk.29.ffn_norm.weight                   | BF16   | [4096]
layers.30.attention.wq.weight                    -> blk.30.attn_q.weight                     | BF16   | [4096, 4096]
layers.30.attention.wk.weight                    -> blk.30.attn_k.weight                     | BF16   | [4096, 4096]
layers.30.attention.wv.weight                    -> blk.30.attn_v.weight                     | BF16   | [4096, 4096]
layers.30.attention.wo.weight                    -> blk.30.attn_output.weight                | BF16   | [4096, 4096]
layers.30.feed_forward.w1.weight                 -> blk.30.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.30.feed_forward.w2.weight                 -> blk.30.ffn_down.weight                   | BF16   | [4096, 11008]
layers.30.feed_forward.w3.weight                 -> blk.30.ffn_up.weight                     | BF16   | [11008, 4096]
layers.30.attention_norm.weight                  -> blk.30.attn_norm.weight                  | BF16   | [4096]
layers.30.ffn_norm.weight                        -> blk.30.ffn_norm.weight                   | BF16   | [4096]
layers.31.attention.wq.weight                    -> blk.31.attn_q.weight                     | BF16   | [4096, 4096]
layers.31.attention.wk.weight                    -> blk.31.attn_k.weight                     | BF16   | [4096, 4096]
layers.31.attention.wv.weight                    -> blk.31.attn_v.weight                     | BF16   | [4096, 4096]
layers.31.attention.wo.weight                    -> blk.31.attn_output.weight                | BF16   | [4096, 4096]
layers.31.feed_forward.w1.weight                 -> blk.31.ffn_gate.weight                   | BF16   | [11008, 4096]
layers.31.feed_forward.w2.weight                 -> blk.31.ffn_down.weight                   | BF16   | [4096, 11008]
layers.31.feed_forward.w3.weight                 -> blk.31.ffn_up.weight                     | BF16   | [11008, 4096]
layers.31.attention_norm.weight                  -> blk.31.attn_norm.weight                  | BF16   | [4096]
layers.31.ffn_norm.weight                        -> blk.31.ffn_norm.weight                   | BF16   | [4096]
skipping tensor rope_freqs
Writing models/Llama-7b-Chinese/ggml-model-f16.gguf, format 1
Ignoring added_tokens.json since model matches vocab size without it.
gguf: This GGUF file is for Little Endian only
gguf: Setting special token type bos to 1
gguf: Setting special token type eos to 2
gguf: Setting special token type pad to 0
gguf: Setting add_bos_token to True
gguf: Setting add_eos_token to False
[  1/291] Writing tensor token_embd.weight                      | size  32000 x   4096  | type F16  | T+   2
[  2/291] Writing tensor output_norm.weight                     | size   4096           | type F32  | T+   2
[  3/291] Writing tensor output.weight                          | size  32000 x   4096  | type F16  | T+   2
[  4/291] Writing tensor blk.0.attn_q.weight                    | size   4096 x   4096  | type F16  | T+   3
[  5/291] Writing tensor blk.0.attn_k.weight                    | size   4096 x   4096  | type F16  | T+   3
[  6/291] Writing tensor blk.0.attn_v.weight                    | size   4096 x   4096  | type F16  | T+   3
[  7/291] Writing tensor blk.0.attn_output.weight               | size   4096 x   4096  | type F16  | T+   3
[  8/291] Writing tensor blk.0.ffn_gate.weight                  | size  11008 x   4096  | type F16  | T+   3
[  9/291] Writing tensor blk.0.ffn_down.weight                  | size   4096 x  11008  | type F16  | T+   3
[ 10/291] Writing tensor blk.0.ffn_up.weight                    | size  11008 x   4096  | type F16  | T+   3
[ 11/291] Writing tensor blk.0.attn_norm.weight                 | size   4096           | type F32  | T+   3
[ 12/291] Writing tensor blk.0.ffn_norm.weight                  | size   4096           | type F32  | T+   3
[ 13/291] Writing tensor blk.1.attn_q.weight                    | size   4096 x   4096  | type F16  | T+   3
[ 14/291] Writing tensor blk.1.attn_k.weight                    | size   4096 x   4096  | type F16  | T+   3
[ 15/291] Writing tensor blk.1.attn_v.weight                    | size   4096 x   4096  | type F16  | T+   3
[ 16/291] Writing tensor blk.1.attn_output.weight               | size   4096 x   4096  | type F16  | T+   3
[ 17/291] Writing tensor blk.1.ffn_gate.weight                  | size  11008 x   4096  | type F16  | T+   3
[ 18/291] Writing tensor blk.1.ffn_down.weight                  | size   4096 x  11008  | type F16  | T+   3
[ 19/291] Writing tensor blk.1.ffn_up.weight                    | size  11008 x   4096  | type F16  | T+   3
[ 20/291] Writing tensor blk.1.attn_norm.weight                 | size   4096           | type F32  | T+   3
[ 21/291] Writing tensor blk.1.ffn_norm.weight                  | size   4096           | type F32  | T+   3
[ 22/291] Writing tensor blk.2.attn_q.weight                    | size   4096 x   4096  | type F16  | T+   3
[ 23/291] Writing tensor blk.2.attn_k.weight                    | size   4096 x   4096  | type F16  | T+   3
[ 24/291] Writing tensor blk.2.attn_v.weight                    | size   4096 x   4096  | type F16  | T+   3
[ 25/291] Writing tensor blk.2.attn_output.weight               | size   4096 x   4096  | type F16  | T+   4
[ 26/291] Writing tensor blk.2.ffn_gate.weight                  | size  11008 x   4096  | type F16  | T+   4
[ 27/291] Writing tensor blk.2.ffn_down.weight                  | size   4096 x  11008  | type F16  | T+   4
[ 28/291] Writing tensor blk.2.ffn_up.weight                    | size  11008 x   4096  | type F16  | T+   4
[ 29/291] Writing tensor blk.2.attn_norm.weight                 | size   4096           | type F32  | T+   4
[ 30/291] Writing tensor blk.2.ffn_norm.weight                  | size   4096           | type F32  | T+   4
[ 31/291] Writing tensor blk.3.attn_q.weight                    | size   4096 x   4096  | type F16  | T+   4
[ 32/291] Writing tensor blk.3.attn_k.weight                    | size   4096 x   4096  | type F16  | T+   4
[ 33/291] Writing tensor blk.3.attn_v.weight                    | size   4096 x   4096  | type F16  | T+   5
[ 34/291] Writing tensor blk.3.attn_output.weight               | size   4096 x   4096  | type F16  | T+   5
[ 35/291] Writing tensor blk.3.ffn_gate.weight                  | size  11008 x   4096  | type F16  | T+   5
[ 36/291] Writing tensor blk.3.ffn_down.weight                  | size   4096 x  11008  | type F16  | T+   5
[ 37/291] Writing tensor blk.3.ffn_up.weight                    | size  11008 x   4096  | type F16  | T+   5
[ 38/291] Writing tensor blk.3.attn_norm.weight                 | size   4096           | type F32  | T+   5
[ 39/291] Writing tensor blk.3.ffn_norm.weight                  | size   4096           | type F32  | T+   5
[ 40/291] Writing tensor blk.4.attn_q.weight                    | size   4096 x   4096  | type F16  | T+   5
[ 41/291] Writing tensor blk.4.attn_k.weight                    | size   4096 x   4096  | type F16  | T+   6
[ 42/291] Writing tensor blk.4.attn_v.weight                    | size   4096 x   4096  | type F16  | T+   6
[ 43/291] Writing tensor blk.4.attn_output.weight               | size   4096 x   4096  | type F16  | T+   6
[ 44/291] Writing tensor blk.4.ffn_gate.weight                  | size  11008 x   4096  | type F16  | T+   6
[ 45/291] Writing tensor blk.4.ffn_down.weight                  | size   4096 x  11008  | type F16  | T+   6
[ 46/291] Writing tensor blk.4.ffn_up.weight                    | size  11008 x   4096  | type F16  | T+   6
[ 47/291] Writing tensor blk.4.attn_norm.weight                 | size   4096           | type F32  | T+   6
[ 48/291] Writing tensor blk.4.ffn_norm.weight                  | size   4096           | type F32  | T+   6
[ 49/291] Writing tensor blk.5.attn_q.weight                    | size   4096 x   4096  | type F16  | T+   6
[ 50/291] Writing tensor blk.5.attn_k.weight                    | size   4096 x   4096  | type F16  | T+   6
[ 51/291] Writing tensor blk.5.attn_v.weight                    | size   4096 x   4096  | type F16  | T+   6
[ 52/291] Writing tensor blk.5.attn_output.weight               | size   4096 x   4096  | type F16  | T+   7
[ 53/291] Writing tensor blk.5.ffn_gate.weight                  | size  11008 x   4096  | type F16  | T+   7
[ 54/291] Writing tensor blk.5.ffn_down.weight                  | size   4096 x  11008  | type F16  | T+   7
[ 55/291] Writing tensor blk.5.ffn_up.weight                    | size  11008 x   4096  | type F16  | T+   7
[ 56/291] Writing tensor blk.5.attn_norm.weight                 | size   4096           | type F32  | T+   7
[ 57/291] Writing tensor blk.5.ffn_norm.weight                  | size   4096           | type F32  | T+   7
[ 58/291] Writing tensor blk.6.attn_q.weight                    | size   4096 x   4096  | type F16  | T+   7
[ 59/291] Writing tensor blk.6.attn_k.weight                    | size   4096 x   4096  | type F16  | T+   7
[ 60/291] Writing tensor blk.6.attn_v.weight                    | size   4096 x   4096  | type F16  | T+   7
[ 61/291] Writing tensor blk.6.attn_output.weight               | size   4096 x   4096  | type F16  | T+   7
[ 62/291] Writing tensor blk.6.ffn_gate.weight                  | size  11008 x   4096  | type F16  | T+   7
[ 63/291] Writing tensor blk.6.ffn_down.weight                  | size   4096 x  11008  | type F16  | T+   8
[ 64/291] Writing tensor blk.6.ffn_up.weight                    | size  11008 x   4096  | type F16  | T+   8
[ 65/291] Writing tensor blk.6.attn_norm.weight                 | size   4096           | type F32  | T+   8
[ 66/291] Writing tensor blk.6.ffn_norm.weight                  | size   4096           | type F32  | T+   8
[ 67/291] Writing tensor blk.7.attn_q.weight                    | size   4096 x   4096  | type F16  | T+   8
[ 68/291] Writing tensor blk.7.attn_k.weight                    | size   4096 x   4096  | type F16  | T+   8
[ 69/291] Writing tensor blk.7.attn_v.weight                    | size   4096 x   4096  | type F16  | T+   8
[ 70/291] Writing tensor blk.7.attn_output.weight               | size   4096 x   4096  | type F16  | T+   8
[ 71/291] Writing tensor blk.7.ffn_gate.weight                  | size  11008 x   4096  | type F16  | T+   8
[ 72/291] Writing tensor blk.7.ffn_down.weight                  | size   4096 x  11008  | type F16  | T+   9
[ 73/291] Writing tensor blk.7.ffn_up.weight                    | size  11008 x   4096  | type F16  | T+   9
[ 74/291] Writing tensor blk.7.attn_norm.weight                 | size   4096           | type F32  | T+   9
[ 75/291] Writing tensor blk.7.ffn_norm.weight                  | size   4096           | type F32  | T+   9
[ 76/291] Writing tensor blk.8.attn_q.weight                    | size   4096 x   4096  | type F16  | T+   9
[ 77/291] Writing tensor blk.8.attn_k.weight                    | size   4096 x   4096  | type F16  | T+   9
[ 78/291] Writing tensor blk.8.attn_v.weight                    | size   4096 x   4096  | type F16  | T+   9
[ 79/291] Writing tensor blk.8.attn_output.weight               | size   4096 x   4096  | type F16  | T+   9
[ 80/291] Writing tensor blk.8.ffn_gate.weight                  | size  11008 x   4096  | type F16  | T+   9
[ 81/291] Writing tensor blk.8.ffn_down.weight                  | size   4096 x  11008  | type F16  | T+   9
[ 82/291] Writing tensor blk.8.ffn_up.weight                    | size  11008 x   4096  | type F16  | T+  10
[ 83/291] Writing tensor blk.8.attn_norm.weight                 | size   4096           | type F32  | T+  10
[ 84/291] Writing tensor blk.8.ffn_norm.weight                  | size   4096           | type F32  | T+  10
[ 85/291] Writing tensor blk.9.attn_q.weight                    | size   4096 x   4096  | type F16  | T+  10
[ 86/291] Writing tensor blk.9.attn_k.weight                    | size   4096 x   4096  | type F16  | T+  10
[ 87/291] Writing tensor blk.9.attn_v.weight                    | size   4096 x   4096  | type F16  | T+  10
[ 88/291] Writing tensor blk.9.attn_output.weight               | size   4096 x   4096  | type F16  | T+  10
[ 89/291] Writing tensor blk.9.ffn_gate.weight                  | size  11008 x   4096  | type F16  | T+  10
[ 90/291] Writing tensor blk.9.ffn_down.weight                  | size   4096 x  11008  | type F16  | T+  10
[ 91/291] Writing tensor blk.9.ffn_up.weight                    | size  11008 x   4096  | type F16  | T+  10
[ 92/291] Writing tensor blk.9.attn_norm.weight                 | size   4096           | type F32  | T+  10
[ 93/291] Writing tensor blk.9.ffn_norm.weight                  | size   4096           | type F32  | T+  10
[ 94/291] Writing tensor blk.10.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  10
[ 95/291] Writing tensor blk.10.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  10
[ 96/291] Writing tensor blk.10.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  10
[ 97/291] Writing tensor blk.10.attn_output.weight              | size   4096 x   4096  | type F16  | T+  10
[ 98/291] Writing tensor blk.10.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  10
[ 99/291] Writing tensor blk.10.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  11
[100/291] Writing tensor blk.10.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  11
[101/291] Writing tensor blk.10.attn_norm.weight                | size   4096           | type F32  | T+  11
[102/291] Writing tensor blk.10.ffn_norm.weight                 | size   4096           | type F32  | T+  11
[103/291] Writing tensor blk.11.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  11
[104/291] Writing tensor blk.11.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  11
[105/291] Writing tensor blk.11.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  11
[106/291] Writing tensor blk.11.attn_output.weight              | size   4096 x   4096  | type F16  | T+  11
[107/291] Writing tensor blk.11.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  11
[108/291] Writing tensor blk.11.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  12
[109/291] Writing tensor blk.11.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  12
[110/291] Writing tensor blk.11.attn_norm.weight                | size   4096           | type F32  | T+  12
[111/291] Writing tensor blk.11.ffn_norm.weight                 | size   4096           | type F32  | T+  12
[112/291] Writing tensor blk.12.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  12
[113/291] Writing tensor blk.12.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  12
[114/291] Writing tensor blk.12.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  12
[115/291] Writing tensor blk.12.attn_output.weight              | size   4096 x   4096  | type F16  | T+  12
[116/291] Writing tensor blk.12.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  12
[117/291] Writing tensor blk.12.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  12
[118/291] Writing tensor blk.12.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  13
[119/291] Writing tensor blk.12.attn_norm.weight                | size   4096           | type F32  | T+  13
[120/291] Writing tensor blk.12.ffn_norm.weight                 | size   4096           | type F32  | T+  13
[121/291] Writing tensor blk.13.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  13
[122/291] Writing tensor blk.13.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  13
[123/291] Writing tensor blk.13.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  13
[124/291] Writing tensor blk.13.attn_output.weight              | size   4096 x   4096  | type F16  | T+  13
[125/291] Writing tensor blk.13.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  13
[126/291] Writing tensor blk.13.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  13
[127/291] Writing tensor blk.13.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  13
[128/291] Writing tensor blk.13.attn_norm.weight                | size   4096           | type F32  | T+  13
[129/291] Writing tensor blk.13.ffn_norm.weight                 | size   4096           | type F32  | T+  13
[130/291] Writing tensor blk.14.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  13
[131/291] Writing tensor blk.14.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  13
[132/291] Writing tensor blk.14.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  13
[133/291] Writing tensor blk.14.attn_output.weight              | size   4096 x   4096  | type F16  | T+  13
[134/291] Writing tensor blk.14.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  13
[135/291] Writing tensor blk.14.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  13
[136/291] Writing tensor blk.14.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  14
[137/291] Writing tensor blk.14.attn_norm.weight                | size   4096           | type F32  | T+  14
[138/291] Writing tensor blk.14.ffn_norm.weight                 | size   4096           | type F32  | T+  14
[139/291] Writing tensor blk.15.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  14
[140/291] Writing tensor blk.15.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  14
[141/291] Writing tensor blk.15.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  14
[142/291] Writing tensor blk.15.attn_output.weight              | size   4096 x   4096  | type F16  | T+  14
[143/291] Writing tensor blk.15.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  14
[144/291] Writing tensor blk.15.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  14
[145/291] Writing tensor blk.15.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  14
[146/291] Writing tensor blk.15.attn_norm.weight                | size   4096           | type F32  | T+  14
[147/291] Writing tensor blk.15.ffn_norm.weight                 | size   4096           | type F32  | T+  14
[148/291] Writing tensor blk.16.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  14
[149/291] Writing tensor blk.16.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  14
[150/291] Writing tensor blk.16.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  14
[151/291] Writing tensor blk.16.attn_output.weight              | size   4096 x   4096  | type F16  | T+  14
[152/291] Writing tensor blk.16.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  14
[153/291] Writing tensor blk.16.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  14
[154/291] Writing tensor blk.16.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  15
[155/291] Writing tensor blk.16.attn_norm.weight                | size   4096           | type F32  | T+  15
[156/291] Writing tensor blk.16.ffn_norm.weight                 | size   4096           | type F32  | T+  15
[157/291] Writing tensor blk.17.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  15
[158/291] Writing tensor blk.17.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  15
[159/291] Writing tensor blk.17.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  15
[160/291] Writing tensor blk.17.attn_output.weight              | size   4096 x   4096  | type F16  | T+  15
[161/291] Writing tensor blk.17.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  15
[162/291] Writing tensor blk.17.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  15
[163/291] Writing tensor blk.17.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  15
[164/291] Writing tensor blk.17.attn_norm.weight                | size   4096           | type F32  | T+  15
[165/291] Writing tensor blk.17.ffn_norm.weight                 | size   4096           | type F32  | T+  15
[166/291] Writing tensor blk.18.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  15
[167/291] Writing tensor blk.18.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  15
[168/291] Writing tensor blk.18.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  15
[169/291] Writing tensor blk.18.attn_output.weight              | size   4096 x   4096  | type F16  | T+  15
[170/291] Writing tensor blk.18.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  15
[171/291] Writing tensor blk.18.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  16
[172/291] Writing tensor blk.18.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  16
[173/291] Writing tensor blk.18.attn_norm.weight                | size   4096           | type F32  | T+  16
[174/291] Writing tensor blk.18.ffn_norm.weight                 | size   4096           | type F32  | T+  16
[175/291] Writing tensor blk.19.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  16
[176/291] Writing tensor blk.19.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  16
[177/291] Writing tensor blk.19.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  16
[178/291] Writing tensor blk.19.attn_output.weight              | size   4096 x   4096  | type F16  | T+  16
[179/291] Writing tensor blk.19.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  16
[180/291] Writing tensor blk.19.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  16
[181/291] Writing tensor blk.19.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  16
[182/291] Writing tensor blk.19.attn_norm.weight                | size   4096           | type F32  | T+  16
[183/291] Writing tensor blk.19.ffn_norm.weight                 | size   4096           | type F32  | T+  16
[184/291] Writing tensor blk.20.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  16
[185/291] Writing tensor blk.20.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  16
[186/291] Writing tensor blk.20.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  16
[187/291] Writing tensor blk.20.attn_output.weight              | size   4096 x   4096  | type F16  | T+  16
[188/291] Writing tensor blk.20.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  16
[189/291] Writing tensor blk.20.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  17
[190/291] Writing tensor blk.20.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  17
[191/291] Writing tensor blk.20.attn_norm.weight                | size   4096           | type F32  | T+  17
[192/291] Writing tensor blk.20.ffn_norm.weight                 | size   4096           | type F32  | T+  17
[193/291] Writing tensor blk.21.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  17
[194/291] Writing tensor blk.21.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  17
[195/291] Writing tensor blk.21.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  17
[196/291] Writing tensor blk.21.attn_output.weight              | size   4096 x   4096  | type F16  | T+  17
[197/291] Writing tensor blk.21.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  17
[198/291] Writing tensor blk.21.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  17
[199/291] Writing tensor blk.21.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  17
[200/291] Writing tensor blk.21.attn_norm.weight                | size   4096           | type F32  | T+  17
[201/291] Writing tensor blk.21.ffn_norm.weight                 | size   4096           | type F32  | T+  17
[202/291] Writing tensor blk.22.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  17
[203/291] Writing tensor blk.22.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  17
[204/291] Writing tensor blk.22.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  17
[205/291] Writing tensor blk.22.attn_output.weight              | size   4096 x   4096  | type F16  | T+  17
[206/291] Writing tensor blk.22.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  17
[207/291] Writing tensor blk.22.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  17
[208/291] Writing tensor blk.22.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  18
[209/291] Writing tensor blk.22.attn_norm.weight                | size   4096           | type F32  | T+  18
[210/291] Writing tensor blk.22.ffn_norm.weight                 | size   4096           | type F32  | T+  18
[211/291] Writing tensor blk.23.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  18
[212/291] Writing tensor blk.23.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  18
[213/291] Writing tensor blk.23.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  18
[214/291] Writing tensor blk.23.attn_output.weight              | size   4096 x   4096  | type F16  | T+  18
[215/291] Writing tensor blk.23.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  18
[216/291] Writing tensor blk.23.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  18
[217/291] Writing tensor blk.23.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  18
[218/291] Writing tensor blk.23.attn_norm.weight                | size   4096           | type F32  | T+  18
[219/291] Writing tensor blk.23.ffn_norm.weight                 | size   4096           | type F32  | T+  18
[220/291] Writing tensor blk.24.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  18
[221/291] Writing tensor blk.24.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  18
[222/291] Writing tensor blk.24.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  18
[223/291] Writing tensor blk.24.attn_output.weight              | size   4096 x   4096  | type F16  | T+  18
[224/291] Writing tensor blk.24.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  18
[225/291] Writing tensor blk.24.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  18
[226/291] Writing tensor blk.24.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  18
[227/291] Writing tensor blk.24.attn_norm.weight                | size   4096           | type F32  | T+  18
[228/291] Writing tensor blk.24.ffn_norm.weight                 | size   4096           | type F32  | T+  18
[229/291] Writing tensor blk.25.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  18
[230/291] Writing tensor blk.25.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  18
[231/291] Writing tensor blk.25.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  18
[232/291] Writing tensor blk.25.attn_output.weight              | size   4096 x   4096  | type F16  | T+  18
[233/291] Writing tensor blk.25.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  19
[234/291] Writing tensor blk.25.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  19
[235/291] Writing tensor blk.25.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  19
[236/291] Writing tensor blk.25.attn_norm.weight                | size   4096           | type F32  | T+  19
[237/291] Writing tensor blk.25.ffn_norm.weight                 | size   4096           | type F32  | T+  19
[238/291] Writing tensor blk.26.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  19
[239/291] Writing tensor blk.26.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  19
[240/291] Writing tensor blk.26.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  19
[241/291] Writing tensor blk.26.attn_output.weight              | size   4096 x   4096  | type F16  | T+  19
[242/291] Writing tensor blk.26.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  19
[243/291] Writing tensor blk.26.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  19
[244/291] Writing tensor blk.26.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  19
[245/291] Writing tensor blk.26.attn_norm.weight                | size   4096           | type F32  | T+  19
[246/291] Writing tensor blk.26.ffn_norm.weight                 | size   4096           | type F32  | T+  19
[247/291] Writing tensor blk.27.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  19
[248/291] Writing tensor blk.27.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  19
[249/291] Writing tensor blk.27.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  19
[250/291] Writing tensor blk.27.attn_output.weight              | size   4096 x   4096  | type F16  | T+  19
[251/291] Writing tensor blk.27.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  19
[252/291] Writing tensor blk.27.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  20
[253/291] Writing tensor blk.27.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  20
[254/291] Writing tensor blk.27.attn_norm.weight                | size   4096           | type F32  | T+  20
[255/291] Writing tensor blk.27.ffn_norm.weight                 | size   4096           | type F32  | T+  20
[256/291] Writing tensor blk.28.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  20
[257/291] Writing tensor blk.28.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  20
[258/291] Writing tensor blk.28.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  20
[259/291] Writing tensor blk.28.attn_output.weight              | size   4096 x   4096  | type F16  | T+  20
[260/291] Writing tensor blk.28.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  20
[261/291] Writing tensor blk.28.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  20
[262/291] Writing tensor blk.28.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  20
[263/291] Writing tensor blk.28.attn_norm.weight                | size   4096           | type F32  | T+  20
[264/291] Writing tensor blk.28.ffn_norm.weight                 | size   4096           | type F32  | T+  20
[265/291] Writing tensor blk.29.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  20
[266/291] Writing tensor blk.29.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  20
[267/291] Writing tensor blk.29.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  20
[268/291] Writing tensor blk.29.attn_output.weight              | size   4096 x   4096  | type F16  | T+  20
[269/291] Writing tensor blk.29.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  20
[270/291] Writing tensor blk.29.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  20
[271/291] Writing tensor blk.29.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  21
[272/291] Writing tensor blk.29.attn_norm.weight                | size   4096           | type F32  | T+  21
[273/291] Writing tensor blk.29.ffn_norm.weight                 | size   4096           | type F32  | T+  21
[274/291] Writing tensor blk.30.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  21
[275/291] Writing tensor blk.30.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  21
[276/291] Writing tensor blk.30.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  21
[277/291] Writing tensor blk.30.attn_output.weight              | size   4096 x   4096  | type F16  | T+  21
[278/291] Writing tensor blk.30.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  21
[279/291] Writing tensor blk.30.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  21
[280/291] Writing tensor blk.30.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  21
[281/291] Writing tensor blk.30.attn_norm.weight                | size   4096           | type F32  | T+  21
[282/291] Writing tensor blk.30.ffn_norm.weight                 | size   4096           | type F32  | T+  21
[283/291] Writing tensor blk.31.attn_q.weight                   | size   4096 x   4096  | type F16  | T+  21
[284/291] Writing tensor blk.31.attn_k.weight                   | size   4096 x   4096  | type F16  | T+  21
[285/291] Writing tensor blk.31.attn_v.weight                   | size   4096 x   4096  | type F16  | T+  21
[286/291] Writing tensor blk.31.attn_output.weight              | size   4096 x   4096  | type F16  | T+  21
[287/291] Writing tensor blk.31.ffn_gate.weight                 | size  11008 x   4096  | type F16  | T+  21
[288/291] Writing tensor blk.31.ffn_down.weight                 | size   4096 x  11008  | type F16  | T+  21
[289/291] Writing tensor blk.31.ffn_up.weight                   | size  11008 x   4096  | type F16  | T+  21
[290/291] Writing tensor blk.31.attn_norm.weight                | size   4096           | type F32  | T+  21
[291/291] Writing tensor blk.31.ffn_norm.weight                 | size   4096           | type F32  | T+  21
Wrote models/Llama-7b-Chinese/ggml-model-f16.gguf
(Llama-7b-Chinese) :~/newmodels/llama.cpp$
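
As an optional sanity check before quantizing, you can read the freshly written GGUF header back. This is a minimal sketch, not part of llama.cpp's documented workflow, assuming the `gguf` Python package from llama.cpp's gguf-py directory is installed (e.g. `pip install gguf`):

# hypothetical verification script; the gguf package API is assumed here
from gguf import GGUFReader

reader = GGUFReader("models/Llama-7b-Chinese/ggml-model-f16.gguf")

# List every metadata key the converter recorded
# (architecture, context length, tokenizer settings, ...)
for field in reader.fields.values():
    print(field.name)

# Confirm all 291 tensors made it into the file
print(f"tensor count: {len(reader.tensors)}")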

Quantization
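
The quantize tool takes the F16 GGUF produced above, an output path, and the target quantization type (here q4_0). In the log below, most tensors are converted to q4_0, while `output.weight` is kept at the higher-precision q6_K; the `hist:` columns after each tensor show the observed distribution of weights across the 16 available 4-bit levels. To see why a 4096 x 4096 F16 tensor (32.00 MiB) lands at exactly 9.00 MiB, here is a minimal sketch (not code from llama.cpp itself) of the q4_0 size arithmetic: weights are stored in blocks of 32, each block holding one f16 scale (2 bytes) plus 32 packed 4-bit quants (16 bytes), i.e. 18 bytes per 32 weights.

# q4_0 block layout: one f16 scale + 32 four-bit quants = 18 bytes / 32 weights
BLOCK = 32
BYTES_PER_BLOCK = 2 + BLOCK // 2   # 2-byte scale + 16 bytes of packed nibbles

def q4_0_bytes(n_elements: int) -> int:
    return n_elements // BLOCK * BYTES_PER_BLOCK

n = 4096 * 4096
print(n * 2 / 2**20)          # F16 size: 32.0 MiB
print(q4_0_bytes(n) / 2**20)  # q4_0 size: 9.0 MiB, matching the log below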


(Llama-7b-Chinese) :~/newmodels/llama.cpp$ ./quantize ./models/Llama-7b-Chinese/ggml-model-f16.gguf ./models/Llama-7b-Chinese/ggml-model-q4_0.gguf q4_0
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3080 Ti Laptop GPU, compute capability 8.6, VMM: yes
main: build = 2038 (7013716)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: quantizing './models/Llama-7b-Chinese/ggml-model-f16.gguf' to './models/Llama-7b-Chinese/ggml-model-q4_0.gguf' as Q4_0
llama_model_loader: loaded meta data with 20 key-value pairs and 291 tensors from ./models/Llama-7b-Chinese/ggml-model-f16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = models
llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 1
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  18:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  19:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:  226 tensors
llama_model_quantize_internal: meta size = 741120 bytes
[   1/ 291]                    token_embd.weight - [ 4096, 32000,     1,     1], type =    f16, quantizing to q4_0 .. size =   250.00 MiB ->    70.31 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[   2/ 291]                   output_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[   3/ 291]                        output.weight - [ 4096, 32000,     1,     1], type =    f16, quantizing to q6_K .. size =   250.00 MiB ->   102.54 MiB
[   4/ 291]                  blk.0.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.034 0.008 0.012 0.019 0.031 0.050 0.084 0.149 0.256 0.150 0.084 0.051 0.031 0.019 0.012 0.010
[   5/ 291]                  blk.0.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.034 0.008 0.013 0.021 0.033 0.054 0.089 0.150 0.226 0.151 0.089 0.054 0.033 0.021 0.013 0.011
[   6/ 291]                  blk.0.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.024 0.036 0.053 0.074 0.096 0.117 0.129 0.117 0.096 0.074 0.053 0.036 0.024 0.020
[   7/ 291]             blk.0.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.035 0.011 0.017 0.028 0.044 0.068 0.100 0.135 0.155 0.135 0.100 0.068 0.044 0.028 0.017 0.014
[   8/ 291]                blk.0.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[   9/ 291]                blk.0.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.076 0.097 0.112 0.118 0.112 0.097 0.077 0.056 0.038 0.025 0.021
[  10/ 291]                  blk.0.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.076 0.096 0.111 0.117 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[  11/ 291]               blk.0.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  12/ 291]                blk.0.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  13/ 291]                  blk.1.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.013 0.022 0.034 0.052 0.074 0.098 0.121 0.132 0.121 0.098 0.074 0.052 0.034 0.022 0.018
[  14/ 291]                  blk.1.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.013 0.022 0.034 0.051 0.074 0.099 0.121 0.132 0.121 0.099 0.074 0.051 0.034 0.022 0.018
[  15/ 291]                  blk.1.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.014 0.023 0.035 0.052 0.073 0.097 0.119 0.130 0.119 0.097 0.074 0.052 0.035 0.023 0.019
[  16/ 291]             blk.1.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.035 0.012 0.020 0.031 0.047 0.070 0.098 0.129 0.146 0.129 0.099 0.070 0.047 0.031 0.020 0.016
[  17/ 291]                blk.1.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.117 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[  18/ 291]                blk.1.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[  19/ 291]                  blk.1.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.076 0.096 0.112 0.118 0.112 0.096 0.077 0.056 0.039 0.025 0.021
[  20/ 291]               blk.1.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  21/ 291]                blk.1.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  22/ 291]                  blk.2.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.024 0.038 0.055 0.076 0.096 0.114 0.122 0.114 0.097 0.076 0.055 0.038 0.024 0.020
[  23/ 291]                  blk.2.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.024 0.037 0.055 0.075 0.097 0.115 0.124 0.115 0.097 0.075 0.055 0.037 0.024 0.020
[  24/ 291]                  blk.2.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.076 0.096 0.112 0.120 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[  25/ 291]             blk.2.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.112 0.117 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[  26/ 291]                blk.2.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[  27/ 291]                blk.2.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[  28/ 291]                  blk.2.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[  29/ 291]               blk.2.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  30/ 291]                blk.2.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  31/ 291]                  blk.3.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.076 0.096 0.113 0.120 0.113 0.097 0.076 0.056 0.038 0.025 0.020
[  32/ 291]                  blk.3.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.076 0.096 0.113 0.120 0.113 0.096 0.076 0.056 0.038 0.025 0.020
[  33/ 291]                  blk.3.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.076 0.096 0.112 0.119 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[  34/ 291]             blk.3.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.077 0.097 0.112 0.118 0.112 0.097 0.077 0.056 0.038 0.025 0.021
[  35/ 291]                blk.3.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[  36/ 291]                blk.3.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.097 0.111 0.117 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[  37/ 291]                  blk.3.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[  38/ 291]               blk.3.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  39/ 291]                blk.3.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  40/ 291]                  blk.4.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.076 0.096 0.112 0.119 0.112 0.096 0.076 0.056 0.038 0.025 0.021
[  41/ 291]                  blk.4.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.076 0.096 0.113 0.120 0.113 0.096 0.076 0.056 0.038 0.025 0.020
[  42/ 291]                  blk.4.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.076 0.096 0.112 0.119 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[  43/ 291]             blk.4.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[  44/ 291]                blk.4.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[  45/ 291]                blk.4.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.112 0.096 0.077 0.056 0.039 0.025 0.021
[  46/ 291]                  blk.4.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[  47/ 291]               blk.4.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  48/ 291]                blk.4.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  49/ 291]                  blk.5.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.076 0.097 0.112 0.119 0.112 0.097 0.076 0.056 0.038 0.025 0.021
[  50/ 291]                  blk.5.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.076 0.097 0.113 0.119 0.113 0.096 0.076 0.056 0.038 0.025 0.020
[  51/ 291]                  blk.5.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.076 0.096 0.112 0.119 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[  52/ 291]             blk.5.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.112 0.117 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[  53/ 291]                blk.5.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[  54/ 291]                blk.5.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.097 0.111 0.117 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[  55/ 291]                  blk.5.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[  56/ 291]               blk.5.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  57/ 291]                blk.5.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  58/ 291]                  blk.6.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.097 0.112 0.118 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[  59/ 291]                  blk.6.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.097 0.112 0.118 0.112 0.096 0.077 0.056 0.039 0.025 0.021
[  60/ 291]                  blk.6.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.076 0.096 0.112 0.119 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[  61/ 291]             blk.6.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.097 0.111 0.117 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[  62/ 291]                blk.6.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[  63/ 291]                blk.6.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[  64/ 291]                  blk.6.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[  65/ 291]               blk.6.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  66/ 291]                blk.6.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  67/ 291]                  blk.7.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.112 0.118 0.112 0.097 0.076 0.056 0.039 0.025 0.021
[  68/ 291]                  blk.7.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.112 0.118 0.112 0.096 0.077 0.056 0.039 0.025 0.021
[  69/ 291]                  blk.7.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.076 0.096 0.112 0.119 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[  70/ 291]             blk.7.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.111 0.117 0.111 0.097 0.077 0.056 0.039 0.025 0.021
[  71/ 291]                blk.7.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[  72/ 291]                blk.7.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.111 0.117 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[  73/ 291]                  blk.7.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[  74/ 291]               blk.7.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  75/ 291]                blk.7.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  76/ 291]                  blk.8.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.076 0.097 0.112 0.118 0.112 0.096 0.077 0.056 0.039 0.025 0.021
[  77/ 291]                  blk.8.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[  78/ 291]                  blk.8.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.076 0.096 0.112 0.119 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[  79/ 291]             blk.8.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[  80/ 291]                blk.8.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[  81/ 291]                blk.8.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.112 0.117 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[  82/ 291]                  blk.8.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[  83/ 291]               blk.8.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  84/ 291]                blk.8.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  85/ 291]                  blk.9.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.096 0.112 0.118 0.112 0.096 0.077 0.056 0.039 0.025 0.021
[  86/ 291]                  blk.9.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.076 0.097 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[  87/ 291]                  blk.9.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.076 0.096 0.112 0.119 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[  88/ 291]             blk.9.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[  89/ 291]                blk.9.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[  90/ 291]                blk.9.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.112 0.117 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[  91/ 291]                  blk.9.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[  92/ 291]               blk.9.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  93/ 291]                blk.9.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[  94/ 291]                 blk.10.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.112 0.096 0.077 0.056 0.039 0.025 0.021
[  95/ 291]                 blk.10.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.096 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[  96/ 291]                 blk.10.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.076 0.096 0.112 0.119 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[  97/ 291]            blk.10.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.026 0.021
[  98/ 291]               blk.10.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.117 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[  99/ 291]               blk.10.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.077 0.097 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 100/ 291]                 blk.10.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[ 101/ 291]              blk.10.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 102/ 291]               blk.10.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 103/ 291]                 blk.11.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 104/ 291]                 blk.11.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.096 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 105/ 291]                 blk.11.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.076 0.096 0.112 0.119 0.112 0.096 0.076 0.056 0.038 0.025 0.021
[ 106/ 291]            blk.11.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 107/ 291]               blk.11.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[ 108/ 291]               blk.11.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 109/ 291]                 blk.11.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[ 110/ 291]              blk.11.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 111/ 291]               blk.11.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 112/ 291]                 blk.12.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.112 0.117 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[ 113/ 291]                 blk.12.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.096 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 114/ 291]                 blk.12.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.076 0.096 0.112 0.119 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[ 115/ 291]            blk.12.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 116/ 291]               blk.12.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 117/ 291]               blk.12.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.112 0.117 0.112 0.097 0.077 0.056 0.038 0.025 0.021
[ 118/ 291]                 blk.12.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 119/ 291]              blk.12.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 120/ 291]               blk.12.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 121/ 291]                 blk.13.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.076 0.097 0.112 0.117 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 122/ 291]                 blk.13.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.096 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 123/ 291]                 blk.13.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.076 0.096 0.112 0.118 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[ 124/ 291]            blk.13.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 125/ 291]               blk.13.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 126/ 291]               blk.13.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.077 0.097 0.112 0.118 0.112 0.097 0.077 0.056 0.038 0.025 0.021
[ 127/ 291]                 blk.13.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 128/ 291]              blk.13.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 129/ 291]               blk.13.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 130/ 291]                 blk.14.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[ 131/ 291]                 blk.14.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.096 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 132/ 291]                 blk.14.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.076 0.096 0.112 0.119 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[ 133/ 291]            blk.14.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[ 134/ 291]               blk.14.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 135/ 291]               blk.14.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.077 0.097 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 136/ 291]                 blk.14.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 137/ 291]              blk.14.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 138/ 291]               blk.14.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 139/ 291]                 blk.15.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.097 0.112 0.117 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 140/ 291]                 blk.15.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.112 0.117 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 141/ 291]                 blk.15.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.076 0.096 0.112 0.119 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[ 142/ 291]            blk.15.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 143/ 291]               blk.15.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 144/ 291]               blk.15.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.077 0.097 0.112 0.118 0.112 0.097 0.077 0.056 0.038 0.025 0.021
[ 145/ 291]                 blk.15.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 146/ 291]              blk.15.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 147/ 291]               blk.15.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 148/ 291]                 blk.16.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.112 0.096 0.077 0.056 0.039 0.025 0.021
[ 149/ 291]                 blk.16.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.112 0.117 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 150/ 291]                 blk.16.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.076 0.096 0.112 0.118 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[ 151/ 291]            blk.16.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.026 0.021
[ 152/ 291]               blk.16.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 153/ 291]               blk.16.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 154/ 291]                 blk.16.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 155/ 291]              blk.16.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 156/ 291]               blk.16.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 157/ 291]                 blk.17.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[ 158/ 291]                 blk.17.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.096 0.112 0.117 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 159/ 291]                 blk.17.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.076 0.096 0.112 0.118 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[ 160/ 291]            blk.17.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 161/ 291]               blk.17.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 162/ 291]               blk.17.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.097 0.111 0.117 0.111 0.097 0.077 0.056 0.039 0.025 0.021
[ 163/ 291]                 blk.17.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 164/ 291]              blk.17.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 165/ 291]               blk.17.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 166/ 291]                 blk.18.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.097 0.111 0.117 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[ 167/ 291]                 blk.18.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.112 0.117 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 168/ 291]                 blk.18.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.096 0.111 0.118 0.112 0.096 0.077 0.056 0.039 0.025 0.021
[ 169/ 291]            blk.18.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 170/ 291]               blk.18.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 171/ 291]               blk.18.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.097 0.111 0.117 0.111 0.097 0.077 0.056 0.039 0.025 0.021
[ 172/ 291]                 blk.18.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 173/ 291]              blk.18.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 174/ 291]               blk.18.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 175/ 291]                 blk.19.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.097 0.111 0.117 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[ 176/ 291]                 blk.19.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.096 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 177/ 291]                 blk.19.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.076 0.096 0.111 0.118 0.112 0.096 0.077 0.056 0.039 0.025 0.021
[ 178/ 291]            blk.19.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 179/ 291]               blk.19.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 180/ 291]               blk.19.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.117 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[ 181/ 291]                 blk.19.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 182/ 291]              blk.19.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 183/ 291]               blk.19.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 184/ 291]                 blk.20.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.111 0.117 0.111 0.097 0.077 0.056 0.039 0.025 0.021
[ 185/ 291]                 blk.20.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 186/ 291]                 blk.20.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.076 0.096 0.112 0.118 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[ 187/ 291]            blk.20.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.097 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 188/ 291]               blk.20.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 189/ 291]               blk.20.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.117 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[ 190/ 291]                 blk.20.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 191/ 291]              blk.20.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 192/ 291]               blk.20.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 193/ 291]                 blk.21.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.112 0.118 0.112 0.096 0.077 0.056 0.039 0.025 0.021
[ 194/ 291]                 blk.21.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.096 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 195/ 291]                 blk.21.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.118 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[ 196/ 291]            blk.21.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 197/ 291]               blk.21.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 198/ 291]               blk.21.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[ 199/ 291]                 blk.21.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 200/ 291]              blk.21.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 201/ 291]               blk.21.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 202/ 291]                 blk.22.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.097 0.112 0.118 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[ 203/ 291]                 blk.22.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.077 0.097 0.112 0.118 0.112 0.097 0.076 0.056 0.039 0.025 0.021
[ 204/ 291]                 blk.22.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.118 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 205/ 291]            blk.22.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[ 206/ 291]               blk.22.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 207/ 291]               blk.22.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[ 208/ 291]                 blk.22.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 209/ 291]              blk.22.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 210/ 291]               blk.22.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 211/ 291]                 blk.23.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.111 0.118 0.112 0.096 0.077 0.056 0.039 0.025 0.021
[ 212/ 291]                 blk.23.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.096 0.112 0.118 0.112 0.096 0.077 0.056 0.039 0.025 0.021
[ 213/ 291]                 blk.23.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.118 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 214/ 291]            blk.23.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.026 0.021
[ 215/ 291]               blk.23.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 216/ 291]               blk.23.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 217/ 291]                 blk.23.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 218/ 291]              blk.23.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 219/ 291]               blk.23.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 220/ 291]                 blk.24.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.096 0.112 0.118 0.112 0.096 0.077 0.056 0.039 0.025 0.021
[ 221/ 291]                 blk.24.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.076 0.097 0.112 0.118 0.112 0.097 0.076 0.056 0.039 0.025 0.021
[ 222/ 291]                 blk.24.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.118 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[ 223/ 291]            blk.24.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.097 0.077 0.057 0.039 0.026 0.021
[ 224/ 291]               blk.24.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 225/ 291]               blk.24.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 226/ 291]                 blk.24.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 227/ 291]              blk.24.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 228/ 291]               blk.24.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 229/ 291]                 blk.25.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.112 0.118 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[ 230/ 291]                 blk.25.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 231/ 291]                 blk.25.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[ 232/ 291]            blk.25.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 233/ 291]               blk.25.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 234/ 291]               blk.25.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 235/ 291]                 blk.25.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 236/ 291]              blk.25.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 237/ 291]               blk.25.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 238/ 291]                 blk.26.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.096 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 239/ 291]                 blk.26.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.077 0.097 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 240/ 291]                 blk.26.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.118 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[ 241/ 291]            blk.26.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 242/ 291]               blk.26.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 243/ 291]               blk.26.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.097 0.111 0.117 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[ 244/ 291]                 blk.26.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 245/ 291]              blk.26.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 246/ 291]               blk.26.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 247/ 291]                 blk.27.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.096 0.112 0.118 0.112 0.096 0.077 0.056 0.039 0.025 0.021
[ 248/ 291]                 blk.27.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.112 0.117 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[ 249/ 291]                 blk.27.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[ 250/ 291]            blk.27.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 251/ 291]               blk.27.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 252/ 291]               blk.27.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.097 0.111 0.117 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[ 253/ 291]                 blk.27.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 254/ 291]              blk.27.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 255/ 291]               blk.27.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 256/ 291]                 blk.28.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.076 0.096 0.111 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 257/ 291]                 blk.28.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.112 0.118 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[ 258/ 291]                 blk.28.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.118 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[ 259/ 291]            blk.28.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 260/ 291]               blk.28.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 261/ 291]               blk.28.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.112 0.117 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 262/ 291]                 blk.28.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 263/ 291]              blk.28.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 264/ 291]               blk.28.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 265/ 291]                 blk.29.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.096 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 266/ 291]                 blk.29.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.097 0.112 0.118 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 267/ 291]                 blk.29.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.118 0.112 0.096 0.077 0.056 0.039 0.025 0.021
[ 268/ 291]            blk.29.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 269/ 291]               blk.29.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[ 270/ 291]               blk.29.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.076 0.097 0.112 0.118 0.112 0.097 0.077 0.056 0.038 0.025 0.021
[ 271/ 291]                 blk.29.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.117 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[ 272/ 291]              blk.29.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 273/ 291]               blk.29.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 274/ 291]                 blk.30.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.076 0.096 0.112 0.119 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[ 275/ 291]                 blk.30.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.039 0.056 0.077 0.096 0.112 0.118 0.112 0.096 0.077 0.056 0.039 0.025 0.021
[ 276/ 291]                 blk.30.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.118 0.111 0.096 0.077 0.056 0.039 0.025 0.021
[ 277/ 291]            blk.30.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.037 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.116 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 278/ 291]               blk.30.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 279/ 291]               blk.30.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.015 0.024 0.038 0.055 0.076 0.097 0.114 0.120 0.114 0.097 0.076 0.055 0.038 0.024 0.020
[ 280/ 291]                 blk.30.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.096 0.111 0.117 0.111 0.096 0.077 0.057 0.039 0.025 0.021
[ 281/ 291]              blk.30.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 282/ 291]               blk.30.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 283/ 291]                 blk.31.attn_q.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.076 0.097 0.112 0.119 0.112 0.097 0.077 0.056 0.038 0.025 0.021
[ 284/ 291]                 blk.31.attn_k.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.076 0.097 0.112 0.118 0.112 0.097 0.076 0.056 0.038 0.025 0.021
[ 285/ 291]                 blk.31.attn_v.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.015 0.025 0.038 0.056 0.076 0.097 0.112 0.119 0.112 0.096 0.076 0.056 0.039 0.025 0.021
[ 286/ 291]            blk.31.attn_output.weight - [ 4096,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    32.00 MiB ->     9.00 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.096 0.111 0.117 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[ 287/ 291]               blk.31.ffn_gate.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.057 0.077 0.097 0.111 0.117 0.111 0.097 0.077 0.057 0.039 0.025 0.021
[ 288/ 291]               blk.31.ffn_down.weight - [11008,  4096,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.015 0.023 0.036 0.054 0.075 0.098 0.116 0.124 0.116 0.098 0.075 0.054 0.036 0.023 0.019
[ 289/ 291]                 blk.31.ffn_up.weight - [ 4096, 11008,     1,     1], type =    f16, quantizing to q4_0 .. size =    86.00 MiB ->    24.19 MiB | hist: 0.036 0.016 0.025 0.039 0.056 0.077 0.097 0.112 0.117 0.112 0.097 0.077 0.056 0.039 0.025 0.021
[ 290/ 291]              blk.31.attn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
[ 291/ 291]               blk.31.ffn_norm.weight - [ 4096,     1,     1,     1], type =    f32, size =    0.016 MB
llama_model_quantize_internal: model size  = 12853.02 MB
llama_model_quantize_internal: quant size  =  3647.87 MB
llama_model_quantize_internal: hist: 0.036 0.015 0.025 0.039 0.056 0.076 0.096 0.112 0.118 0.112 0.096 0.077 0.056 0.039 0.025 0.021

main: quantize time =  4430.95 ms
main:    total time =  4430.95 ms
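A quick sanity check on the sizes above (my own back-of-the-envelope arithmetic, not llama.cpp output): q4_0 stores weights in blocks of 32, each block holding sixteen bytes of packed 4-bit values plus one f16 scale, i.e. 18 bytes per 32 weights (4.5 bits per weight). That reproduces the per-tensor numbers in the quantization log:

# Sanity check of the q4_0 tensor sizes reported in the quantization log.
# Assumption: one q4_0 block = 32 weights = 16 bytes of 4-bit values + 2-byte f16 scale.
def q4_0_mib(n_weights: int) -> float:
    return n_weights // 32 * 18 / 2**20

print(q4_0_mib(4096 * 4096))   # 9.00 MiB  -> matches the attn_* tensors
print(q4_0_mib(4096 * 11008))  # 24.19 MiB -> matches the ffn_* tensors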


Run the command (ggml-model-f16.gguf)

(Llama-7b-Chinese) :~/newmodels/llama.cpp$ ./main -m ./models/Llama-7b-Chinese/ggml-model-f16.gguf -n 512 --n-gpu-layers 100 --prompt "我给大家介绍一下中国"
Log start
main: build = 2038 (7013716)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed  = 1713953724
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3080 Ti Laptop GPU, compute capability 8.6, VMM: yes
llama_model_loader: loaded meta data with 20 key-value pairs and 291 tensors from ./models/Llama-7b-Chinese/ggml-model-f16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = models
llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 1
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  18:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  19:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:  226 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 2048
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 2048
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = F16
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 12.55 GiB (16.00 BPW)
llm_load_print_meta: general.name     = models
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.22 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =   250.00 MiB
llm_load_tensors:      CUDA0 buffer size = 12603.02 MiB
...................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:  CUDA_Host input buffer size   =     9.01 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =    77.55 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =     8.80 MiB
llama_new_context_with_model: graph splits (measure): 3

system_info: n_threads = 10 / 20 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
sampling:
        repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
        top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
        mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temp
generate: n_ctx = 512, n_batch = 512, n_predict = 512, n_keep = 0


  我给大家介绍一下中国的奢侈品,让大家更加了解中国的文化吧。
  - 静电洗头器(一种高科技尽管相对较新的设备)
  - 自动牌纸机(手抓是最古老的方式)
  - 梦译术士(即将变成一种科技,但目前还有很多人在这上市风险高)
  - 纸牌(现在的童子军也不是只用石头了,因为我们并不是满洲里的一个文化,所以学习方式和思想都不同)
  - 钢琴(中国人最喜欢弹箱子,所以我们在家中经常会有这种东西,当然如果你去中国见过老人仍是使用打铃招待的时候,那不可能是钢琴)
  - 牛皮(一些高档的餐馆会给客人赠送牛皮来进行卫生保健,或者说我们现在还是喜欢看假面貌而不是真面目)
  - 手表(这里要特意注明:中国人喜欢装饰自己的手上的东西,所以我们绝对不会只买一只手表)
  - 美甲、化妆品、肤脓(中国古代人的时候拿木头去除蛀牙或者切开颊口来休息,如果你在北京一旁看到了老人们都是这样的)
  - 手机、电视(中国有些地
llama_print_timings:        load time =    2104.31 ms
llama_print_timings:      sample time =      88.61 ms /   512 runs   (    0.17 ms per token,  5777.93 tokens per second)
llama_print_timings: prompt eval time =      44.33 ms /    14 tokens (    3.17 ms per token,   315.80 tokens per second)
llama_print_timings:        eval time =   22674.35 ms /   511 runs   (   44.37 ms per token,    22.54 tokens per second)
llama_print_timings:       total time =   22960.52 ms /   525 tokens
Log end
(Llama-7b-Chinese) :~/newmodels/llama.cpp$
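Before moving on, the reported "KV self size = 256.00 MiB" is easy to verify (again my own arithmetic, assuming the f16 cache the log reports; there is no GQA shrinkage here since n_head_kv == n_head):

# KV cache size check: K and V caches, 32 layers, n_ctx = 512, n_embd = 4096, f16.
n_layer, n_ctx, n_embd, bytes_per_f16 = 32, 512, 4096, 2
kv_bytes = 2 * n_layer * n_ctx * n_embd * bytes_per_f16  # factor 2 = K plus V
print(kv_bytes / 2**20)  # 256.0 -> matches "KV self size = 256.00 MiB"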

Monitor GPU usage in real time with nvidia-smi


watch -n 1 nvidia-smi  # refresh the GPU stats every 1 second

NVIDIA-SMI 550.76.01   # version of the nvidia-smi driver utility
Driver Version: 552.22  # installed driver version
CUDA Version: 12.4   # highest CUDA version the driver supports
GPU: GPU index on this machine, starting from 0; this machine has a single GPU
Fan: fan speed (0%-100%); N/A means no fan reading is available
Name: GPU name/model, NVIDIA GeForce RTX 3080 Ti
Temp: GPU temperature (an overheating GPU will clock itself down); 63C here
Perf: performance state, from P0 (maximum performance) down to P12 (minimum performance); P3 is shown here
Pwr:Usage/Cap: GPU power draw, current usage versus the cap; 79W / 80W here
Persistence-M: persistence mode (keeps the driver loaded at the cost of extra power); On here
Bus-Id: GPU bus ID, 00000000:01:00.0
Disp.A: Display Active, whether a display is initialized on this GPU; Off here
Memory-Usage: VRAM usage, 13140MiB / 16384MiB, i.e. close to full
Volatile GPU-Util: GPU utilization, 92% here
Uncorr. ECC: whether ECC error detection/correction is enabled (0/DISABLED, 1/ENABLED); N/A here
Compute M.: compute mode (0/DEFAULT, 1/EXCLUSIVE_PROCESS, 2/PROHIBITED); Default here
Processes: per-process view showing which GPU each process occupies, its PID, and its VRAM usage; here the llama.cpp /main binary
GPU Memory Usage: the VRAM used by that process (reported as N/A here)
Every 1.0s: nvidia-smi                                                                                       

Wed Apr 24 20:14:20 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.76.01              Driver Version: 552.22         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3080 ...    On  |   00000000:01:00.0 Off |                  N/A |
| N/A   63C    P3             79W /   80W |   13140MiB /  16384MiB |     92%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A    140468      C   /main                                       N/A      |
+-----------------------------------------------------------------------------------------+
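If you only need a few of these fields, nvidia-smi's query mode is a lighter alternative to watch; the flags below are standard nvidia-smi options (-i selects the GPU index, -l sets the refresh interval in seconds):

nvidia-smi -i 0 -l 1 --query-gpu=utilization.gpu,memory.used,memory.total,temperature.gpu,power.draw --format=csv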

Run the command (ggml-model-q4_0.gguf)

(Llama-7b-Chinese) :~/newmodels/llama.cpp$ ./main -m ./models/Llama-7b-Chinese/ggml-model-q4_0.gguf -n 512 --prompt "我给大家介绍一下中国"
Log start
main: build = 2038 (7013716)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed  = 1713953984
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3080 Ti Laptop GPU, compute capability 8.6, VMM: yes
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from ./models/Llama-7b-Chinese/ggml-model-q4_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = models
llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  18:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  19:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 2048
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 2048
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.56 GiB (4.54 BPW)
llm_load_print_meta: general.name     = models
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.11 MiB
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/33 layers to GPU
llm_load_tensors:        CPU buffer size =  3647.87 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:  CUDA_Host KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:  CUDA_Host input buffer size   =     9.01 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    77.55 MiB
llama_new_context_with_model: graph splits (measure): 1

system_info: n_threads = 10 / 20 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
sampling:
        repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
        top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
        mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temp
generate: n_ctx = 512, n_batch = 512, n_predict = 512, n_keep = 0


  我给大家介绍一下中国国内的许多电商平台,可是很多人都不知道什么时候会出现新的电商平台,因此为了方便大家使用,
 Hinweis:虽然这些电商平台大部分都是在中国开始推出,但它们还是可以到达全球的。

### 1. 京东JD.com

![](https://img2022.cnblogs.com/blog/576439/202208/202208_79a0cfbf-4dfb-4c2d-aa12-eecfcd6fd764.jpg)

京东JD.com是中国的最大电商平台,在中国开始推出的时候就已经有着相当较高的市场份领,为了保持这一状态,京东JD.com也不断投入资金去吸引更多的用户进来,现在他们已经成功地将自己从电子商务平台发展到了中国第一大的全线物流服务商。
京东JD.com也有相当较高的国际客户份领,可是由于他们的官网还没有支持外语版本,所以只能通过代理人来进行投放,这样就会导致外国用户很难了进行交易。

### 2. 微信小程序

![](https://img2022.cnblogs.com/blog/576439/202208/202208_d1aafbc6-f6e3-41db-ba57-18cf1decdffb.jpg)

微信小程序也是在中国最为人所知的一款
llama_print_timings:        load time =     359.18 ms
llama_print_timings:      sample time =     125.12 ms /   512 runs   (    0.24 ms per token,  4091.91 tokens per second)
llama_print_timings: prompt eval time =    1128.84 ms /    14 tokens (   80.63 ms per token,    12.40 tokens per second)
llama_print_timings:        eval time =   51984.61 ms /   511 runs   (  101.73 ms per token,     9.83 tokens per second)
llama_print_timings:       total time =   53440.98 ms /   525 tokens
Log end

Monitor GPU usage in real time with nvidia-smi


Every 1.0s: nvidia-smi                                                                                                  : Wed Apr 24 18:19:59 2024

Wed Apr 24 18:19:59 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.76.01              Driver Version: 552.22         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3080 ...    On  |   00000000:01:00.0 Off |                  N/A |
| N/A   61C    P8             13W /   80W |     198MiB /  16384MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A    160829      C   /main                                       N/A      |
+-----------------------------------------------------------------------------------------+
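Note that this q4_0 run offloaded 0/33 layers, so inference ran entirely on the CPU: nvidia-smi shows 0% GPU utilization and only 198MiB of VRAM in use. The loader reports 33 layers in total for this model, so passing --n-gpu-layers 33 (or any larger value; 100 is used below) offloads everything:

./main -m ./models/Llama-7b-Chinese/ggml-model-q4_0.gguf -n 512 --n-gpu-layers 33 --prompt "我给大家介绍一下中国"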


(Llama-7b-Chinese) :~/newmodels/llama.cpp$ ./main -m ./models/Llama-7b-Chinese/ggml-model-q4_0.gguf -n 512 --n-gpu-layers 100 --prompt "我给大家介绍一下中国"
Log start
main: build = 2038 (7013716)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed  = 1713954147
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3080 Ti Laptop GPU, compute capability 8.6, VMM: yes
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from ./models/Llama-7b-Chinese/ggml-model-q4_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = models
llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  18:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  19:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 2048
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 2048
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.56 GiB (4.54 BPW)
llm_load_print_meta: general.name     = models
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.22 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =    70.31 MiB
llm_load_tensors:      CUDA0 buffer size =  3577.56 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:  CUDA_Host input buffer size   =     9.01 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =    77.55 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =     8.80 MiB
llama_new_context_with_model: graph splits (measure): 3

system_info: n_threads = 10 / 20 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
sampling:
        repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
        top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
        mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temp
generate: n_ctx = 512, n_batch = 512, n_predict = 512, n_keep = 0


  我给大家介绍一下中国的很多文化。 everybody knows about the chinese culture, but only a few people know about our traditional folk custom.

    Today i want to tell you about one of China's traditional folk custom and its significance.

### 问题:
我今天要讲一个很多人都不知道的中国传统文化,该什么?这个文化是什么意思?

    Today i want to tell you about one of China's traditional folk custom and its significance.

### 解题:
我们家族的很多年代以来,每逢某一月中有些天气特别质疑,家人都会举行一种活动,就是在桌子上放上大量的白发和胎光。我们称之为“寻骷髅”。

这个儿童时代的我对这件事情并不了解,后来才知道这是一种传统文化,由此揭开了一层中国古老的神话故事幽静的面纱。

在现代社会,人们如何在生产时间下保持职业操守?还有人如何在家庭裡保持教育作用和推动儿童成长?

这是我想说的那个传统文化。

### 解题:
我们家族的很多年代以来,每逢某一月中有些天气特别质疑,家人都会举行一种活动,就是在桌子上放上大量的白发和胎光。我们称之为“寻骷髅”。

这个儿童时代的我对这件事情并不
llama_print_timings:        load time =     638.00 ms
llama_print_timings:      sample time =      88.82 ms /   512 runs   (    0.17 ms per token,  5764.60 tokens per second)
llama_print_timings: prompt eval time =      59.77 ms /    14 tokens (    4.27 ms per token,   234.23 tokens per second)
llama_print_timings:        eval time =    8798.01 ms /   511 runs   (   17.22 ms per token,    58.08 tokens per second)
llama_print_timings:       total time =    9093.05 ms /   525 tokens
Log end

Monitor GPU usage in real time with nvidia-smi


Wed Apr 24 18:21:54 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.76.01              Driver Version: 552.22         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3080 ...    On  |   00000000:01:00.0 Off |                  N/A |
| N/A   68C    P0             78W /   80W |    4114MiB /  16384MiB |     77%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A    166223      C   /main                                       N/A      |
+-----------------------------------------------------------------------------------------+
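Putting the three runs side by side (throughput taken from the llama_print_timings eval lines above), the q4_0 model with full GPU offload is clearly the fastest:

# Generation throughput from the three runs logged above (tokens/second).
runs = {
    "f16, 33/33 layers on GPU":  22.54,
    "q4_0, CPU only":             9.83,
    "q4_0, 33/33 layers on GPU": 58.08,
}
base = runs["f16, 33/33 layers on GPU"]
for name, tps in runs.items():
    print(f"{name}: {tps:.2f} tok/s ({tps / base:.1f}x vs f16 on GPU)")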







