AI-Ollama-Client


LICENSE

modifying or distributing the Package, you accept this license. Do not
use, modify, or distribute the Package, if you do not accept this
license.

(11)  If your Modified Version has been derived from a Modified
Version made by someone other than you, you are nevertheless required
to ensure that your Modified Version complies with the requirements of
this license.

(12)  This license does not grant you the right to use any trademark,
service mark, tradename, or logo of the Copyright Holder.

(13)  This license includes the non-exclusive, worldwide,
free-of-charge patent license to make, have made, use, offer to sell,
sell, import and otherwise transfer the Package with respect to any
patent claims licensable by the Copyright Holder that are necessarily
infringed by the Package. If you institute patent litigation
(including a cross-claim or counterclaim) against any party alleging
that the Package constitutes direct or contributory patent
infringement, then this Artistic License to you shall terminate on the
date that such litigation is filed.

MANIFEST.SKIP

^\.github
^\.prove$
Makefile$
^blib
^pm_to_blib
^.*\.bak$
^.*\.old$
^t.*sessions
^t/.*\.disabled$
^cover_db
^.*\.log
^.*\.swp$
^jar/
^cpan/
^MYMETA
^\.releaserc
^.*\.cmd$
^AI-Ollama-Client
^frame-\d+\.png
^demo/
^\.carmel/

lib/AI/Ollama/RequestOptions.pm


Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

=cut

has 'frequency_penalty' => (
    is       => 'ro',
    isa      => Num,
);
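As a hypothetical sketch (not part of the distribution), the attribute above maps to a plain numeric entry in the C<options> hash of an Ollama C</api/generate> request body; the model name and prompt here are illustrative only:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use JSON::PP ();

# Hypothetical sketch: build the "options" hash that accompanies an
# Ollama /api/generate request body.
my %options = (
    frequency_penalty => 1.1,   # positive value discourages verbatim repeats
);

my $payload = JSON::PP->new->canonical->encode({
    model   => 'llama2',        # hypothetical model name
    prompt  => 'Say hello.',
    options => \%options,
});
print $payload, "\n";
```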

=head2 C<< logits_all >>

Enable logits all. (Default: false)

=cut

has 'logits_all' => (
    is       => 'ro',
);

=head2 C<< low_vram >>

Enable low VRAM mode. (Default: false)

=cut

has 'low_vram' => (

lib/AI/Ollama/RequestOptions.pm


=cut

has 'num_predict' => (
    is       => 'ro',
    isa      => Int,
);

=head2 C<< num_thread >>

Sets the number of threads to use during computation. By default, Ollama will detect this for optimal performance. It is recommended to set this value to the number of physical CPU cores your system has (as opposed to the logical number of cores).

=cut

has 'num_thread' => (
    is       => 'ro',
    isa      => Int,
);
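A minimal sketch of choosing these values explicitly; the core count of 8 is an assumed stand-in for your machine's physical cores (Ollama auto-detects when C<num_thread> is unset):

```perl
use strict;
use warnings;

# Hypothetical sketch: set num_thread explicitly. The value 8 stands in
# for this machine's physical core count, not its logical (SMT) threads.
my %options = (
    num_predict => 128,   # cap on generated tokens
    num_thread  => 8,     # physical cores, per the recommendation above
);
printf "num_thread=%d\n", $options{num_thread};
```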

=head2 C<< numa >>

ollama/ollama-curated.json

{"openapi":"3.0.3","components":{"schemas":{"PushModelResponse":{"properties":{"total":{"type":"integer","description":"total size of the model","example":"2142590208"},"status":{"$ref":"#/components/schemas/PushModelStatus"},"digest":{"example":"sha...

ollama/ollama-curated.yaml

          description: |
            The GPU to use for the main model. Default is 0.
        low_vram:
          type: boolean
          description: |
            Enable low VRAM mode. (Default: false)
        f16_kv:
          type: boolean
          description: |
            Enable f16 key/value. (Default: false)
        logits_all:
          type: boolean
          description: |
            Enable logits all. (Default: false)
        vocab_only:
          type: boolean
          description: |
            Enable vocab only. (Default: false)
        use_mmap:
          type: boolean
          description: |
            Enable mmap. (Default: false)
        use_mlock:
          type: boolean

ollama/ollama-curated.yaml

          description: |
            The base of the rope frequency scale. (Default: 1.0)
        rope_frequency_scale:
          type: number
          format: float
          description: |
            The scale of the rope frequency. (Default: 1.0)
        num_thread:
          type: integer
          description: |
            Sets the number of threads to use during computation. By default, Ollama will detect this for optimal performance. It is recommended to set this value to the number of physical CPU cores your system has (as opposed to the logical number of cores).
    ResponseFormat:
      type: string
      description: |
        The format to return a response in. Currently the only accepted value is json.

        Enable JSON mode by setting the format parameter to json. This will structure the response as valid JSON.

        Note: it's important to instruct the model to use JSON in the prompt. Otherwise, the model may generate large amounts of whitespace.
      enum:
        - json
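A hypothetical sketch of a request body with JSON mode enabled; the model name and prompt are assumptions, and note the prompt itself asks for JSON, as the description above advises:

```perl
use strict;
use warnings;
use JSON::PP ();

# Hypothetical sketch: enable JSON mode via the format parameter, and
# also request JSON in the prompt to avoid runaway whitespace.
my $body = JSON::PP->new->canonical->encode({
    model  => 'llama2',   # hypothetical model name
    prompt => 'List three primary colors. Respond in JSON.',
    format => 'json',     # currently the only accepted value
});
print $body, "\n";
```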

t/generate.request

      ""
    ],
    "numa": true,
    "num_ctx": 0,
    "num_batch": 0,
    "num_gqa": 0,
    "num_gpu": 0,
    "main_gpu": 0,
    "low_vram": true,
    "f16_kv": true,
    "logits_all": true,
    "vocab_only": true,
    "use_mmap": true,
    "use_mlock": true,
    "embedding_only": true,
    "rope_frequency_base": 0,
    "rope_frequency_scale": 0,
    "num_thread": 0
  },
  "format": "json",
  "raw": true,

xt/copyright.t


sub wanted {
  push @files, $File::Find::name if /\.p(l|m|od)$/;
}

sub collect {
    my( $file ) = @_;
    note $file;
    my $modified_ts;
    if( $is_checkout ) {
        # diag `git log -1 --pretty="format:%ct" "$file"`;
        $modified_ts = `git log -1 --pretty="format:%ct" "$file"`;
    } else {
        $modified_ts = (stat($file))[9];
    }

    my $modified_year;
    if( $modified_ts ) {
        $modified_year = strftime('%Y', localtime($modified_ts));
    } else {
        $modified_year = 1970;
    };


