AI-TensorFlow-Libtensorflow


CONTRIBUTING

<<<=== COPYRIGHT CONTRIBUTIONS ===>>>
 
[ BEGIN, APTECH FAMILY COPYRIGHT ASSIGNMENT AGREEMENT ]
 
By contributing to this repository, you agree that any and all such Contributions and derivative works thereof shall immediately become part of the APTech Family of software and documentation, and you accept and agree to the following legally-binding...
 
1. Definitions.
 
"You" or "Your" shall mean the copyright owner, or legal entity authorized by the copyright owner, that is making this Agreement.  For legal entities, the entity making a Contribution and all other entities that control, are controlled by, or are und...
 
"APTech" is defined as the Delaware corporation named Auto-Parallel Technologies, Inc. with a primary place of business in Cedar Park, Texas, USA.
 
The "APTech Family of software and documentation" (hereinafter the "APTech Family") is defined as all copyrightable works identified as "part of the APTech Family" immediately following their copyright notice, and includes but is not limited to this ...
 
"Team APTech" is defined as all duly-authorized contributors to the APTech Family, including You after making Your first Contribution to the APTech Family under the terms of this Agreement.
 
"Team APTech Leadership" is defined as all duly-authorized administrators and official representatives of the APTech Family, as listed publicly on the most up-to-date copy of the AutoParallel.com website.
 
"Contribution" shall mean any original work of authorship, including any changes or additions or enhancements to an existing work, that is intentionally submitted by You to this repository for inclusion in, or documentation of, any of the products or...
 
2. Assignment of Copyright.  Subject to the terms and conditions of this Agreement, and for good and valuable consideration, receipt of which You acknowledge, You hereby transfer to the Delaware corporation named Auto-Parallel Technologies, Inc. with...
 
You hereby agree that if You have or acquire hereafter any patent or interface copyright or other intellectual property interest dominating the software or documentation contributed to by the Work (or use of that software or documentation), such domi...
 
You hereby represent and warrant that You are the sole copyright holder for the Work and that You have the right and power to enter into this legally-binding contractual agreement.  You hereby indemnify and hold harmless APTech, its heirs, assignees,...
 
3. Grant of Patent License.  Subject to the terms and conditions of this Agreement, You hereby grant to APTech and to recipients of software distributed by APTech a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as ...
 
4. You represent that you are legally entitled to assign the above copyright and grant the above patent license.  If your employer(s) or contractee(s) have rights to intellectual property that you create that includes your Contributions, then you rep...
 
5. You represent that each of Your Contributions is Your original creation and is not subject to any third-party license or other restriction (including, but not limited to, related patents and trademarks) of which you are personally aware and which ...
 
6. You agree to submit written notification to Team APTech Leadership of any facts or circumstances of which you become aware that would make the representations of this Agreement inaccurate in any respect.
 
[ END, APTECH FAMILY COPYRIGHT ASSIGNMENT AGREEMENT ]
 
 
<<<=== LEGAL OVERVIEW ===>>>
 
All APTech Family software and documentation is legally copyrighted by Auto-Parallel Technologies, Inc.
 
To maintain the legal integrity and defensibility of the APTech Family of software and documentation, all contributors to the APTech Family must assign copyright ownership to Auto-Parallel Technologies, Inc. under the terms of the APTech Family Copyr...

CONTRIBUTING

Why The FSF Gets Copyright Assignments From Contributors
By Professor Eben Moglen, Columbia University Law School
Copyright © 2001, 2008, 2009, 2014 Free Software Foundation, Inc.
The quoted text below is not modified, and is licensed under a Creative Commons Attribution-NoDerivs 3.0 United States License.
 
"Under US copyright law, which is the law under which most free software programs have historically been first published, there are very substantial procedural advantages to registration of copyright.  And despite the broad right of distribution conv...
 
In order to make sure that all of our copyrights can meet the recordkeeping and other requirements of registration, and in order to be able to enforce the GPL most effectively, FSF requires that each author of code incorporated in FSF projects provid...
 
 
<<<=== COMMITMENT TO FREE & OPEN SOURCE SOFTWARE ===>>>
 
Auto-Parallel Technologies, Inc. is committed to maintaining the free-and-open-source software (FOSS) basis of the APTech Family.
 
If your APTech Family contribution is accepted and merged into an official APTech Family source repository, then your contribution is automatically published online with FOSS licensing, currently the Apache License Version 2.0.
 
 
<<<=== EMPLOYER COPYRIGHT DISCLAIMER AGREEMENT ===>>>
 
The file named EMPLOYERS.pdf contains the Employer Copyright Disclaimer Agreement.  If you are employed or work as an independent contractor, and either your job involves computer programming or you have executed an agreement giving your employer or ...
 
 
<<<=== OTHER CONTRIBUTORS ===>>>
 
If anyone other than yourself has written software source code or documentation as part of your APTech Family contribution, then they must submit their contributions themselves under the terms of the APTech Family Copyright Assignment Agreement above...
 
Please be sure you DO NOT STUDY OR INCLUDE any 3rd-party or public-domain intellectual property as part of your APTech Family contribution, including but not limited to: source code; documentation; copyrighted, trademarked, or patented components; or...
 
 
<<<=== RECOGNITION ===>>>
 
Once we have received your contribution under the terms of the APTech Family Copyright Assignment Agreement above, as well as any necessary Employer Copyright Disclaimer Agreement(s), then we will begin the process of reviewing any software pull requ...
 
 
<<<=== SUBMISSION ===>>>
 
When you are ready to submit the signed agreement(s), please answer the following 12 questions about yourself and your APTech Family contribution, then include your answers in the body of your e-mail or on a separate sheet of paper in snail mail, and...
 
1.  Full Legal Name
2.  Preferred Pseudonym (or "none")
3.  Country of Citizenship
4.  Date of Birth (spell full month name)
5.  Snail Mail Address (include country)
6.  E-Mail Address
7.  Names of APTech Family Files Modified (or "none")
8.  Names of APTech Family Files Created (or "none")
9.  Current Employer(s) or Contractee(s) (or "none")
10. Does Your Job Involve Computer Programming? (or "not applicable")
11. Does Your Job Involve an IP Ownership Agreement? (or "not applicable")
12. Name(s) & Employer(s) of Additional Contributors (or "none")
 
Snail Mail Address:
 
Auto-Parallel Technologies, Inc.
[ CONTACT VIA E-MAIL BELOW FOR STREET ADDRESS ]
Cedar Park, TX, USA, 78613
 
E-Mail Address (Remove "NOSPAM." Before Sending):
 
william.braswell at NOSPAM.autoparallel.com
 
THANKS FOR CONTRIBUTING!  :-)

COPYRIGHT

AI::TensorFlow::Libtensorflow is Copyright © 2022 Auto-Parallel Technologies, Inc.
All rights reserved.
 
AI::TensorFlow::Libtensorflow is part of the APTech Family of software and documentation.
 
This program is free software; you can redistribute it and/or modify
it under the terms of the Apache License Version 2.0.
 
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
Apache License Version 2.0 for more details.

Changes

0.0.7 2023-10-05 01:27:42-0400
 
  Features
 
 
  Refactoring
 
   - Add timer to the notebooks to time the inference steps. See <https://github.com/EntropyOrg/perl-AI-TensorFlow-Libtensorflow/pull/17>.
 
  Documentation
 
   - Add information about installing GPU version of `libtensorflow` either on

Changes

     Update the CI to additionally build the GPU Docker image. See <https://github.com/EntropyOrg/perl-AI-TensorFlow-Libtensorflow/pull/16>.
 
0.0.6 2023-01-30 15:22:04-0500
 
  - Documentation
 
      - Fix NAME for Notebook POD.
 
0.0.5 2023-01-30 11:46:31-0500
 
  - Features
 
      - Docker images with dependencies for notebooks.
      - Support for running notebooks in Binder.
 
  - Documentation
 
      - Add manual index and quickstart guide.
      - Add InferenceUsingTFHubEnformerGeneExprPredModel tutorial.
 
0.0.4 2022-12-21 15:57:53-0500
 
  - Features
 
      - Add Data::Printer and stringification support for several classes.
      - Add `::TFLibrary` class. Move `GetAllOpList()` method there.
 
  - Documentation
 
      - Add InferenceUsingTFHubMobileNetV2Model tutorial.
 
0.0.3 2022-12-15 10:46:52-0500
 
  - Features
 
      - Add more testing of basic API. Complete port of "(CAPI, *)" tests
        from upstream `tensorflow/c/c_api_test.cc`.
 
0.0.2 2022-11-28 14:33:33-0500
 
  - Features
 
      - Explicit support for minimum Perl v5.14.
 
0.0.1 2022-11-25 11:43:37-0500
 
  Features
 
    - First release.

LICENSE

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
 
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
 
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
 
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
 
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
 
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
 
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
 
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
 
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

LICENSE

   with Licensor regarding such Contributions.
 
6. Trademarks. This License does not grant permission to use the trade
   names, trademarks, service marks, or product names of the Licensor,
   except as required for reasonable and customary use in describing the
   origin of the Work and reproducing the content of the NOTICE file.
 
7. Disclaimer of Warranty. Unless required by applicable law or
   agreed to in writing, Licensor provides the Work (and each
   Contributor provides its Contributions) on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
   implied, including, without limitation, any warranties or conditions
   of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
   PARTICULAR PURPOSE. You are solely responsible for determining the
   appropriateness of using or redistributing the Work and assume any
   risks associated with Your exercise of permissions under this License.
 
8. Limitation of Liability. In no event and under no legal theory,
   whether in tort (including negligence), contract, or otherwise,
   unless required by applicable law (such as deliberate and grossly
   negligent acts) or agreed to in writing, shall any Contributor be
   liable to You for damages, including any direct, indirect, special,
   incidental, or consequential damages of any character arising as a
   result of this License or out of the use or inability to use the
   Work (including but not limited to damages for loss of goodwill,
   work stoppage, computer failure or malfunction, or any and all
   other commercial damages or losses), even if such Contributor
   has been advised of the possibility of such damages.
 
9. Accepting Warranty or Additional Liability. While redistributing
   the Work or Derivative Works thereof, You may choose to offer,
   and charge a fee for, acceptance of support, warranty, indemnity,
   or other liability obligations and/or rights consistent with this
   License. However, in accepting such obligations, You may act only
   on Your own behalf and on Your sole responsibility, not on behalf
   of any other Contributor, and only if You agree to indemnify,
   defend, and hold each Contributor harmless for any liability
   incurred by, or claims asserted against, such Contributor by reason
   of your accepting any such warranty or additional liability.
 
END OF TERMS AND CONDITIONS
 
APPENDIX: How to apply the Apache License to your work.
 
   To apply the Apache License to your work, attach the following

LICENSE

Copyright 2022 Auto-Parallel Technologies, Inc.
 
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
 
    http://www.apache.org/licenses/LICENSE-2.0
 
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

META.json

      "examples",
      "inc",
      "share",
      "t",
      "xt",
      "maint"
   ]
},
"prereqs" : {
   "configure" : {
      "requires" : {
         "ExtUtils::MakeMaker" : "0",
         "perl" : "5.014"
      }
   },
   "develop" : {
      "requires" : {
         "Moose" : "0",
         "Moose::Role" : "0",
         "Pod::Simple::Search" : "0",
         "Test::More" : "0.88",
         "Test::Perl::Critic" : "0",
         "Test::Pod::LinkCheck::Lite" : "0",
         "Test::Pod::Snippets" : "0",
         "Test::Pod::Snippets::Parser" : "0",
         "With::Roles" : "0"
      },

META.json

      "Module::Runtime" : "0",
      "Mu" : "0",
      "Path::Tiny" : "0",
      "Sort::Key::Multi" : "0",
      "Sub::Uplevel" : "0",
      "Syntax::Construct" : "0",
      "Types::Path::Tiny" : "0"
   }
},
"runtime" : {
   "requires" : {
      "Alien::Libtensorflow" : "0",
      "Class::Tiny" : "0",
      "Const::Exporter" : "0",
      "Const::Fast" : "0",
      "Devel::StrictMode" : "0",
      "Exporter::Tiny" : "0",
      "FFI::C" : "0.12",
      "FFI::C::ArrayDef" : "0",
      "FFI::C::StructDef" : "0",
      "FFI::CheckLib" : "0.28",

META.json

            "perl" : "5.014",
            "strict" : "0",
            "warnings" : "0"
         },
         "suggests" : {
            "Data::Printer" : "0",
            "PDL" : "0"
         }
      },
      "test" : {
         "requires" : {
            "Data::Dumper" : "0",
            "PDL" : "0",
            "PDL::Core" : "0",
            "Path::Tiny" : "0",
            "Test2::V0" : "0",
            "Test::More" : "0",
            "aliased" : "0",
            "lib" : "0",
            "perl" : "5.014"
         }
      }
   },
   "release_status" : "stable",
   "resources" : {
      "repository" : {
         "type" : "git",
      }
   },
   "version" : "0.0.7",
   "x_generated_by_perl" : "v5.26.1",
   "x_serialization_backend" : "Cpanel::JSON::XS version 4.37",
   "x_spdx_expression" : "Apache-2.0"
}
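
The "prereqs" structure above follows the standard CPAN::Meta::Spec layout: phases ("configure", "develop", "runtime", "test"), each holding relationship maps such as "requires" or "suggests". A minimal sketch of walking that structure programmatically with the CPAN::Meta module's documented API:

  use CPAN::Meta;

  # Load this distribution's metadata and list its runtime requirements.
  my $meta = CPAN::Meta->load_file('META.json');
  my $reqs = $meta->effective_prereqs->requirements_for('runtime', 'requires');
  printf "%-40s %s\n", $_, $reqs->requirements_for_module($_)
      for sort $reqs->required_modules;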

META.yml

---
abstract: 'Bindings for Libtensorflow deep learning library'
author:
  - 'Zakariyya Mughal <zmughal@cpan.org>'
build_requires:
  Data::Dumper: '0'
  PDL: '0'
  PDL::Core: '0'
  Path::Tiny: '0'
  Test2::V0: '0'
  Test::More: '0'
  aliased: '0'
  lib: '0'
  perl: '5.014'
configure_requires:
  ExtUtils::MakeMaker: '0'
  perl: '5.014'
dynamic_config: 0
generated_by: 'Dist::Zilla version 6.030, CPAN::Meta::Converter version 2.150010'
license: apache
meta-spec:
  version: '1.4'
name: AI-TensorFlow-Libtensorflow
no_index:
  directory:
    - eg
    - examples
    - inc
    - share
    - t
    - xt
    - maint
requires:
  Alien::Libtensorflow: '0'
  Class::Tiny: '0'
  Const::Exporter: '0'
  Const::Fast: '0'
  Devel::StrictMode: '0'
  Exporter::Tiny: '0'
  FFI::C: '0.12'
  FFI::C::ArrayDef: '0'
  FFI::C::StructDef: '0'
  FFI::CheckLib: '0.28'

META.yml

  Types::Common: '0'
  Types::Standard: '0'
  base: '0'
  constant: '0'
  feature: '0'
  namespace::autoclean: '0'
  overload: '0'
  perl: '5.014'
  strict: '0'
  warnings: '0'
resources:
version: 0.0.7
x_generated_by_perl: v5.26.1
x_serialization_backend: 'YAML::Tiny version 1.74'
x_spdx_expression: Apache-2.0

dist.ini

;; For xt/author/pod-linkcheck.t
; authordep Test::Pod::LinkCheck::Lite
;; For xt/author/pod-snippets.t
; authordep Test::Pod::Snippets
; authordep Pod::Simple::Search
; authordep With::Roles
 
[Test::Perl::Critic]
; authordep Perl::Critic::Community
 
[Prereqs / RuntimeRequires]
; Needs Perl v5.14 for Feature::Compat::Defer
perl = 5.014
FFI::Platypus = 2.00
FFI::C = 0.12
FFI::CheckLib = 0
FFI::Platypus::Type::Enum = 0
FFI::Platypus::Type::PtrObject = 0
 
[Prereqs / RuntimeSuggests]
PDL = 0

lib/AI/TensorFlow/Libtensorflow/ApiDefMap.pm

 
my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);
 
# Attach TF_NewApiDefMap as the class method New; the wrapper discards the
# Perl class name before invoking the C function.
$ffi->attach( [ 'NewApiDefMap' => 'New' ] => [
        arg 'TF_Buffer' => 'op_list_buffer',
        arg 'TF_Status' => 'status',
] => 'TF_ApiDefMap' => sub {
        my ($xs, $class, @rest) = @_;
        $xs->(@rest);
});
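
# Usage sketch (not shown in this excerpt): New() takes a TF_Buffer holding a
# serialized OpList plus a status object. Per the 0.0.4 change log entry, the
# full op list is available from the ::TFLibrary class; the Status constructor
# named here is assumed, not confirmed by this excerpt.
#
#   my $status  = AI::TensorFlow::Libtensorflow::Status->New;
#   my $api_map = AI::TensorFlow::Libtensorflow::ApiDefMap->New(
#       AI::TensorFlow::Libtensorflow::TFLibrary->GetAllOpList,
#       $status,
#   );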
 
$ffi->attach( ['DeleteApiDefMap' => 'DESTROY'] => [
        arg 'TF_ApiDefMap' => 'apimap'
] => 'void');
 
$ffi->attach( [ 'ApiDefMapPut' => 'Put' ] => [
        arg 'TF_ApiDefMap' => 'api_def_map',
        arg 'tf_text_buffer' => [qw(text text_len)],
        arg 'TF_Status' => 'status',

lib/AI/TensorFlow/Libtensorflow/Buffer.pm

        my $opaque = $ffi->cast('data_deallocator_t', 'opaque', $closure);
        $self->_data_deallocator( $opaque );
}
 
 
$ffi->attach( [ 'NewBuffer' => 'New' ] => [] => 'TF_Buffer' );
 
# Attach TF_NewBufferFromString as the class method NewFromString; the single
# Perl data argument is marshalled into the (proto, proto_len) pair.
$ffi->attach( [ 'NewBufferFromString' => 'NewFromString' ] => [
        arg 'tf_buffer_buffer' => [qw(proto proto_len)]
] => 'TF_Buffer' => sub {
        my ($xs, $class, @rest) = @_;
        $xs->(@rest);
});
 
 
$ffi->attach( [ 'DeleteBuffer' => 'DESTROY' ] => [ 'TF_Buffer' ], 'void' );
 
1;
 
__END__
 
=pod

lib/AI/TensorFlow/Libtensorflow/Buffer.pm

=head1 NAME
 
AI::TensorFlow::Libtensorflow::Buffer - Buffer that holds pointer to data with length
 
=head1 SYNOPSIS
 
  use aliased 'AI::TensorFlow::Libtensorflow::Buffer' => 'Buffer';
 
=head1 DESCRIPTION
 
C<TFBuffer> is a data structure that stores a pointer to a block of data, the
length of the data, and optionally a deallocator function for memory
management.
 
This structure is typically used in C<libtensorflow> to store the data for a
serialized protocol buffer.
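 
A minimal sketch of both constructors, assuming the scalar passed to
C<NewFromString> is given by reference as in the binding attached earlier in
this file:

  use aliased 'AI::TensorFlow::Libtensorflow::Buffer' => 'Buffer';

  # Wrap existing bytes (e.g. a serialized protocol buffer).
  my $data   = 'serialized protobuf bytes';
  my $buffer = Buffer->NewFromString(\$data);

  # Or create an empty TFBuffer to be filled in by a libtensorflow call.
  my $empty = Buffer->New;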
 
=head1 CONSTRUCTORS
 
=head2 New

lib/AI/TensorFlow/Libtensorflow/DataType.pm

  use AI::TensorFlow::Libtensorflow::DataType qw(FLOAT @DTYPES);
  use List::Util qw(max);
 
  my $dtype = FLOAT;
  is FLOAT->Size, 4, 'FLOAT is 4 bytes large';
  is max(map { $_->Size } @DTYPES), 16,
    'Largest type has sizeof() == 16 bytes';
 
=head1 DESCRIPTION
 
Enum representing native data types used inside of containers such as
L<TFTensor|AI::TensorFlow::Libtensorflow::Lib::Types/TFTensor>.
 
=head1 CONSTANTS
 
=head2 STRING
 
String.
 
=head2 BOOL

lib/AI/TensorFlow/Libtensorflow/DataType.pm

=head2 QUINT8
 
8-bit quantized unsigned integer.
 
=head2 QUINT16
 
16-bit quantized unsigned integer.
 
=head2 RESOURCE
 
Handle to a mutable resource.
 
=head2 VARIANT
 
Variant.
 
=head1 METHODS
 
=head2 Size
 
  my $size = $dtype->Size();

lib/AI/TensorFlow/Libtensorflow/Eager/Context.pm

use strict;
my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);
 
$ffi->attach( [ 'NewContext' => 'New' ] => [
        arg TFE_ContextOptions => 'opts',
        arg TF_Status => 'status'
] => 'TFE_Context' => sub {
        my ($xs, $class, @rest) = @_;
        $xs->(@rest);
} );
 
__END__
 
=pod
 
=encoding UTF-8
 
=head1 NAME

lib/AI/TensorFlow/Libtensorflow/Graph.pm

# ABSTRACT: A TensorFlow computation, represented as a dataflow graph
$AI::TensorFlow::Libtensorflow::Graph::VERSION = '0.0.7';
use strict;
my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);

lib/AI/TensorFlow/Libtensorflow/Graph.pm

        arg 'tf_dims_buffer' => [qw(dims num_dims)],
        arg 'TF_Status' => 'status',
] => 'void');
 
# Attach TF_GraphGetTensorShape as GetTensorShape; the wrapper pre-sizes the
# dims array from GetTensorNumDims, lets the C call fill it in, and returns
# it as an arrayref.
$ffi->attach( ['GraphGetTensorShape' => 'GetTensorShape'] => [
        arg 'TF_Graph' => 'graph',
        arg 'TF_Output' => 'output',
        arg 'tf_dims_buffer' => [qw(dims num_dims)],
        arg 'TF_Status' => 'status',
] => 'void' => sub {
        my ($xs, @rest) = @_;
        my ($graph, $output, $status) = @rest;
        # Allocate one slot per dimension before the call.
        my $dims = [ (0)x($graph->GetTensorNumDims($output, $status)) ];
        $xs->($graph, $output, $dims, $status);
        return $dims;
});
 
$ffi->attach( [ 'GraphGetTensorNumDims' => 'GetTensorNumDims' ] => [
        arg 'TF_Graph' => 'graph',
        arg 'TF_Output' => 'output',
        arg 'TF_Status' => 'status',
] => 'int');
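 
# Usage sketch for the methods attached above (assumes $graph, $output, and
# $status objects were constructed elsewhere): GetTensorNumDims returns the
# rank as an integer, and the GetTensorShape wrapper returns an arrayref of
# dimension sizes, with -1 marking an unknown dimension.
#
#   my $rank = $graph->GetTensorNumDims($output, $status);
#   my $dims = $graph->GetTensorShape($output, $status);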

lib/AI/TensorFlow/Libtensorflow/Graph.pm

1;
 
__END__
 
=pod
 
=encoding UTF-8
 
=head1 NAME
 
AI::TensorFlow::Libtensorflow::Graph - A TensorFlow computation, represented as a dataflow graph
 
=head1 SYNOPSIS
 
  use aliased 'AI::TensorFlow::Libtensorflow::Graph' => 'Graph';
 
=head1 DESCRIPTION
 
=head1 CONSTRUCTORS
 
=head2 New

lib/AI/TensorFlow/Libtensorflow/ImportGraphDefResults.pm

use FFI::Platypus::Buffer qw(buffer_to_scalar window);
use List::Util ();
 
my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);
 
$ffi->attach( [ 'DeleteImportGraphDefResults' => 'DESTROY' ] => [
        arg TF_ImportGraphDefResults => 'results',
] => 'void' );
 
$ffi->attach( [ 'ImportGraphDefResultsReturnOutputs' => 'ReturnOutputs' ] => [
        arg TF_ImportGraphDefResults => 'results',
        arg 'int*' => 'num_outputs',
        arg 'opaque*' => { id => 'outputs', type => 'TF_Output_struct_array*' },
] => 'void' => sub {
        my ($xs, $results) = @_;
        my $num_outputs;
        my $outputs_array = undef;
        $xs->($results, \$num_outputs, \$outputs_array);
        return [] if $num_outputs == 0;
 
        my $sizeof_output = $ffi->sizeof('TF_Output');
        window(my $outputs_packed, $outputs_array, $sizeof_output * $num_outputs );
        # due to unpack, these are copies (no longer owned by $results)
        my @outputs = map bless(\$_, "AI::TensorFlow::Libtensorflow::Output"),
                unpack "(a${sizeof_output})*", $outputs_packed;
        return \@outputs;
});
 
$ffi->attach( [ 'ImportGraphDefResultsReturnOperations' => 'ReturnOperations' ] => [
        arg TF_ImportGraphDefResults => 'results',
        arg 'int*' => 'num_opers',
        arg 'opaque*' => { id => 'opers', type => 'TF_Operation_array*' },
] => 'void' => sub {
        my ($xs, $results) = @_;
        my $num_opers;
        my $opers_array = undef;
        $xs->($results, \$num_opers, \$opers_array);
        return [] if $num_opers == 0;
 
        my $opers_array_base_packed = buffer_to_scalar($opers_array,
                $ffi->sizeof('opaque') * $num_opers );
        my @opers = map {
                $ffi->cast('opaque', 'TF_Operation', $_ )
        } unpack "(@{[ AI::TensorFlow::Libtensorflow::Lib::_pointer_incantation ]})*", $opers_array_base_packed;
        return \@opers;
} );
 
$ffi->attach( [ 'ImportGraphDefResultsMissingUnusedInputMappings' => 'MissingUnusedInputMappings' ] => [
    arg TF_ImportGraphDefResults => 'results',
    arg 'int*' => 'num_missing_unused_input_mappings',
    arg 'opaque*' => { id => 'src_names', ctype => 'const char***' },
    arg 'opaque*' => { id => 'src_indexes', ctype => 'int**' },
] => 'void' => sub {
        my ($xs, $results) = @_;
        my $num_missing_unused_input_mappings;
        my $src_names;
        my $src_indexes;
        $xs->($results,
                \$num_missing_unused_input_mappings,
                \$src_names, \$src_indexes
        );
        my $src_names_str   = $ffi->cast('opaque',
                "string[$num_missing_unused_input_mappings]", $src_names);
        my $src_indexes_int = $ffi->cast('opaque',
                "int[$num_missing_unused_input_mappings]", $src_indexes);
        return [ List::Util::zip($src_names_str, $src_indexes_int) ];
});
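 
# Usage sketch for the three accessors above (assumes $results is a
# TF_ImportGraphDefResults object produced by a graph-import call):
#
#   my $outputs = $results->ReturnOutputs;       # arrayref of ::Output objects
#   my $opers   = $results->ReturnOperations;    # arrayref of TF_Operation objects
#   my $missing = $results->MissingUnusedInputMappings;
#   # ...$missing is an arrayref of [source_name, source_index] pairs.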

lib/AI/TensorFlow/Libtensorflow/Manual.pod

=item L<AI::TensorFlow::Libtensorflow::Manual::Quickstart>
 
Start here to get an overview of the library.
 
=item L<AI::TensorFlow::Libtensorflow::Manual::GPU>
 
GPU-specific installation and usage information.
 
=item L<AI::TensorFlow::Libtensorflow::Manual::CAPI>
 
Appendix of all C API functions with their signatures. These are linked from
the documentation of individual methods.
 
=back
 
=head1 AUTHOR
 
Zakariyya Mughal <zmughal@cpan.org>
 
=head1 COPYRIGHT AND LICENSE

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

=head2 TF_GraphSetTensorShape
 
=over 2
 
  Sets the shape of the Tensor referenced by `output` in `graph` to
  the shape described by `dims` and `num_dims`.
   
  If the number of dimensions is unknown, `num_dims` must be set to
  -1 and `dims` can be null. If a dimension is unknown, the
  corresponding entry in the `dims` array must be -1.
   
  This does not overwrite the existing shape associated with `output`,
  but merges the input shape with the existing shape.  For example,
  setting a shape of [-1, 2] with an existing shape [2, -1] would set
  a final shape of [2, 2] based on shape merging semantics.
   
  Returns an error into `status` if:
    * `output` is not in `graph`.
    * An invalid shape is being set (e.g., the shape being set
      is incompatible with the existing shape).

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

=head2 TF_GraphGetTensorShape
 
=over 2
 
  Returns the shape of the Tensor referenced by `output` in `graph`
  into `dims`. `dims` must be an array large enough to hold `num_dims`
  entries (e.g., the return value of TF_GraphGetTensorNumDims).
   
  If the number of dimensions in the shape is unknown or the shape is
  a scalar, `dims` will remain untouched. Otherwise, each element of
  `dims` will be set corresponding to the size of the dimension. An
  unknown dimension is represented by `-1`.
   
  Returns an error into `status` if:
    * `output` is not in `graph`.
    * `num_dims` does not match the actual number of dimensions.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_GraphGetTensorShape(TF_Graph* graph,
                                                    TF_Output output,

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrFuncName(TF_OperationDescription* desc,
                                                const char* attr_name,
                                                const char* value, size_t length);
 
=head2 TF_SetAttrShape
 
=over 2
 
  Set `num_dims` to -1 to represent "unknown rank".  Otherwise,
  `dims` points to an array of length `num_dims`.  `dims[i]` must be
  >= -1, with -1 meaning "unknown dimension".
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrShape(TF_OperationDescription* desc,
                                             const char* attr_name,
                                             const int64_t* dims, int num_dims);
 
=head2 TF_SetAttrShapeList
 
=over 2
 
  `dims` and `num_dims` must point to arrays of length `num_shapes`.
  Set `num_dims[i]` to -1 to represent "unknown rank".  Otherwise,
  `dims[i]` points to an array of length `num_dims[i]`.  `dims[i][j]`
  must be >= -1, with -1 meaning "unknown dimension".
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrShapeList(TF_OperationDescription* desc,
                                                 const char* attr_name,
                                                 const int64_t* const* dims,
                                                 const int* num_dims,
                                                 int num_shapes);
 
=head2 TF_SetAttrTensorShapeProto
 
=over 2
 
  `proto` must point to an array of `proto_len` bytes representing a
  binary-serialized TensorShapeProto.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrTensorShapeProto(
      TF_OperationDescription* desc, const char* attr_name, const void* proto,
      size_t proto_len, TF_Status* status);
 
=head2 TF_SetAttrTensorShapeProtoList
 
=over 2
 
  `protos` and `proto_lens` must point to arrays of length `num_shapes`.
  `protos[i]` must point to an array of `proto_lens[i]` bytes
  representing a binary-serialized TensorShapeProto.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrTensorShapeProtoList(
      TF_OperationDescription* desc, const char* attr_name,
      const void* const* protos, const size_t* proto_lens, int num_shapes,
      TF_Status* status);
 
=head2 TF_SetAttrTensor

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

                                                  const char* attr_name,
                                                  TF_Tensor* const* values,
                                                  int num_values,
                                                  TF_Status* status);
 
=head2 TF_SetAttrValueProto
 
=over 2
 
  `proto` should point to a sequence of bytes of length `proto_len`
  representing a binary serialization of an AttrValue protocol
  buffer.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrValueProto(TF_OperationDescription* desc,
                                                  const char* attr_name,
                                                  const void* proto,
                                                  size_t proto_len,
                                                  TF_Status* status);

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

                                                       TF_Status* status);
 
=head2 TF_OperationGetAttrStringList
 
=over 2
 
  Get the list of strings in the value of the attribute `attr_name`.  Fills in
  `values` and `lengths`, each of which must point to an array of length at
  least `max_values`.
   
  The elements of values will point to addresses in `storage` which must be at
  least `storage_size` bytes in length.  Ideally, max_values would be set to
  TF_AttrMetadata.list_size and `storage` would be at least
  TF_AttrMetadata.total_size, obtained from TF_OperationGetAttrMetadata(oper,
  attr_name).
   
  Fails if storage_size is too small to hold the requested number of strings.
 
=back
 
  /* From <tensorflow/c/c_api.h> */

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

                                                      const char* attr_name,
                                                      int64_t* value,
                                                      int num_dims,
                                                      TF_Status* status);
 
=head2 TF_OperationGetAttrShapeList
 
=over 2
 
  Fills in `dims` with the list of shapes in the attribute `attr_name` of
  `oper` and `num_dims` with the corresponding number of dimensions. On return,
  for every i where `num_dims[i]` > 0, `dims[i]` will be an array of
  `num_dims[i]` elements. A value of -1 for `num_dims[i]` indicates that the
  i-th shape in the list is unknown.
   
  The elements of `dims` will point to addresses in `storage` which must be
  large enough to hold at least `storage_size` int64_ts.  Ideally, `num_shapes`
  would be set to TF_AttrMetadata.list_size and `storage_size` would be set to
  TF_AttrMetadata.total_size from TF_OperationGetAttrMetadata(oper,
  attr_name).
   
  Fails if storage_size is insufficient to hold the requested shapes.
 
=back
 
  /* From <tensorflow/c/c_api.h> */

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

                                                           const char* attr_name,
                                                           TF_Tensor** values,
                                                           int max_values,
                                                           TF_Status* status);
 
=head2 TF_OperationGetAttrValueProto
 
=over 2
 
  Sets `output_attr_value` to the binary-serialized AttrValue proto
  representation of the value of the `attr_name` attr of `oper`.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrValueProto(
      TF_Operation* oper, const char* attr_name, TF_Buffer* output_attr_value,
      TF_Status* status);
 
=head2 TF_OperationGetNumAttrs

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Operation* TF_GraphNextOperation(TF_Graph* graph,
                                                            size_t* pos);
 
=head2 TF_GraphToGraphDef
 
=over 2
 
  Write out a serialized representation of `graph` (as a GraphDef protocol
  message) to `output_graph_def` (allocated by TF_NewBuffer()).
  `output_graph_def`'s underlying buffer will be freed when TF_DeleteBuffer()
  is called.
   
  May fail on very large graphs in the future.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_GraphToGraphDef(TF_Graph* graph,

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ImportGraphDefOptionsAddControlDependency(
      TF_ImportGraphDefOptions* opts, TF_Operation* oper);
 
=head2 TF_ImportGraphDefOptionsAddReturnOutput
 
=over 2
 
  Add an output in `graph_def` to be returned via the `return_outputs` output
  parameter of TF_GraphImportGraphDef(). If the output is remapped via an input
  mapping, the corresponding existing tensor in `graph` will be returned.
  `oper_name` is copied and has no lifetime requirements.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ImportGraphDefOptionsAddReturnOutput(
      TF_ImportGraphDefOptions* opts, const char* oper_name, int index);
 
=head2 TF_ImportGraphDefOptionsNumReturnOutputs

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

  TF_CAPI_EXPORT extern int TF_ImportGraphDefOptionsNumReturnOperations(
      const TF_ImportGraphDefOptions* opts);
 
=head2 TF_ImportGraphDefResultsReturnOutputs
 
=over 2
 
  Fetches the return outputs requested via
  TF_ImportGraphDefOptionsAddReturnOutput(). The number of fetched outputs is
  returned in `num_outputs`. The array of return outputs is returned in
  `outputs`. `*outputs` is owned by and has the lifetime of `results`.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ImportGraphDefResultsReturnOutputs(
      TF_ImportGraphDefResults* results, int* num_outputs, TF_Output** outputs);
 
=head2 TF_ImportGraphDefResultsReturnOperations
 
=over 2
 
  Fetches the return operations requested via
  TF_ImportGraphDefOptionsAddReturnOperation(). The number of fetched
  operations is returned in `num_opers`. The array of return operations is
  returned in `opers`. `*opers` is owned by and has the lifetime of `results`.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ImportGraphDefResultsReturnOperations(
      TF_ImportGraphDefResults* results, int* num_opers, TF_Operation*** opers);
 
=head2 TF_ImportGraphDefResultsMissingUnusedInputMappings
 
=over 2
 
  Fetches any input mappings requested via
  TF_ImportGraphDefOptionsAddInputMapping() that didn't appear in the GraphDef
  and weren't used as input to any node in the imported graph def. The number
  of fetched mappings is returned in `num_missing_unused_input_mappings`. The
  array of each mapping's source node name is returned in `src_names`, and the
  array of each mapping's source index is returned in `src_indexes`.
   
  `*src_names`, `*src_indexes`, and the memory backing each string in
  `src_names` are owned by and have the lifetime of `results`.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ImportGraphDefResultsMissingUnusedInputMappings(
      TF_ImportGraphDefResults* results, int* num_missing_unused_input_mappings,
      const char*** src_names, int** src_indexes);
 
=head2 TF_DeleteImportGraphDefResults
 
=over 2
 
  Deletes a results object returned by TF_GraphImportGraphDefWithResults().
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteImportGraphDefResults(
      TF_ImportGraphDefResults* results);
 
=head2 TF_GraphImportGraphDefWithResults
 
=over 2
 
  Import the graph serialized in `graph_def` into `graph`.  Returns nullptr and
  a bad status on error. Otherwise, returns a populated
  TF_ImportGraphDefResults instance. The returned instance must be deleted via
  TF_DeleteImportGraphDefResults().

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

                                    TF_Status* status);
 
=head2 TF_GraphImportGraphDefWithReturnOutputs
 
=over 2
 
  Import the graph serialized in `graph_def` into `graph`.
  Convenience function for when only return outputs are needed.
   
  `num_return_outputs` must be the number of return outputs added (i.e. the
  result of TF_ImportGraphDefOptionsNumReturnOutputs()).  If
  `num_return_outputs` is non-zero, `return_outputs` must be of length
  `num_return_outputs`. Otherwise it can be null.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_GraphImportGraphDefWithReturnOutputs(
      TF_Graph* graph, const TF_Buffer* graph_def,
      const TF_ImportGraphDefOptions* options, TF_Output* return_outputs,
      int num_return_outputs, TF_Status* status);
 
=head2 TF_GraphImportGraphDef
 
=over 2
 
  Import the graph serialized in `graph_def` into `graph`.
  Convenience function for when no results are needed.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_GraphImportGraphDef(
      TF_Graph* graph, const TF_Buffer* graph_def,
      const TF_ImportGraphDefOptions* options, TF_Status* status);
 
=head2 TF_GraphCopyFunction

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_GraphNumFunctions(TF_Graph* g);
 
=head2 TF_GraphGetFunctions
 
=over 2
 
  Fills in `funcs` with the TF_Function* registered in `g`.
  `funcs` must point to an array of TF_Function* of length at least
  `max_func`. In usual usage, max_func should be set to the result of
  TF_GraphNumFunctions(g). In this case, all the functions registered in
  `g` will be returned. Else, an unspecified subset.
   
  If successful, returns the number of TF_Function* successfully set in
  `funcs` and sets status to OK. The caller takes ownership of
  all the returned TF_Functions. They must be deleted with TF_DeleteFunction.
  On error, returns 0, sets status to the encountered error, and the contents
  of funcs will be undefined.
 
=back

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_FinishWhile(const TF_WhileParams* params,
                                            TF_Status* status,
                                            TF_Output* outputs);
 
=head2 TF_AbortWhile
 
=over 2
 
  Frees `params`s resources without building a while loop. `params` is no
  longer valid after this returns. Either this or TF_FinishWhile() must be
  called after a successful TF_NewWhile() call.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_AbortWhile(const TF_WhileParams* params);
 
=head2 TF_AddGradients
 
=over 2
 
  Adds operations to compute the partial derivatives of sum of `y`s w.r.t `x`s,
  i.e., d(y_1 + y_2 + ...)/dx_1, d(y_1 + y_2 + ...)/dx_2...
   
  `dx` are used as initial gradients (which represent the symbolic partial
  derivatives of some loss function `L` w.r.t. `y`).
  `dx` must be nullptr or have size `ny`.
  If `dx` is nullptr, the implementation will use dx of `OnesLike` for all
  shapes in `y`.
  The partial derivatives are returned in `dy`. `dy` should be allocated to
  size `nx`.
   
  Gradient nodes are automatically named under the "gradients/" prefix. To
  guarantee name uniqueness, subsequent calls to the same graph will
  append an incremental tag to the prefix: "gradients_1/", "gradients_2/", ...

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

=head2 TF_AddGradientsWithPrefix
 
=over 2
 
  Adds operations to compute the partial derivatives of sum of `y`s w.r.t `x`s,
  i.e., d(y_1 + y_2 + ...)/dx_1, d(y_1 + y_2 + ...)/dx_2...
  This is a variant of TF_AddGradients that allows the caller to pass a custom
  name prefix to the operations added to a graph to compute the gradients.
   
  `dx` are used as initial gradients (which represent the symbolic partial
  derivatives of some loss function `L` w.r.t. `y`).
  `dx` must be nullptr or have size `ny`.
  If `dx` is nullptr, the implementation will use dx of `OnesLike` for all
  shapes in `y`.
  The partial derivatives are returned in `dy`. `dy` should be allocated to
  size `nx`.
  `prefix` names the scope into which all gradients operations are being added.
  `prefix` must be unique within the provided graph otherwise this operation
  will fail. If `prefix` is nullptr, the default prefixing behaviour takes
  place, see TF_AddGradients for more details.

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

             array of operations is necessary to distinguish the case of
             creating a function with no body (e.g. identity or permutation)
             and the case of creating a function whose body contains all
             the nodes in the graph (except for the automatic skipping, see
             below).
 opers - Array of operations to become the body of the function or null.
         - If no array is given (`num_opers` = -1), all the
         operations in `fn_body` will become part of the function
         except operations referenced in `inputs`. These operations
         must have a single output (these operations are typically
         placeholders created for the sole purpose of representing
         an input. We can relax this constraint if there are
         compelling use cases).
         - If an array is given (`num_opers` >= 0), all operations
         in it will become part of the function. In particular, no
         automatic skipping of dummy input operations is performed.
 ninputs - number of elements in `inputs` array
 inputs - array of TF_Outputs that specify the inputs to the function.
          If `ninputs` is zero (the function takes no inputs), `inputs`
          can be null. The names used for function inputs are normalized
          names of the operations (usually placeholders) pointed to by
          `inputs`. These operation names should start with a letter.
          Normalization will convert all letters to lowercase and
          non-alphanumeric characters to '_' to make resulting names match
          the "[a-z][a-z0-9_]*" pattern for operation argument names.
          `inputs` cannot contain the same tensor twice.
 noutputs - number of elements in `outputs` array
 outputs - array of TF_Outputs that specify the outputs of the function.
           If `noutputs` is zero (the function returns no outputs), `outputs`
           can be null. `outputs` can contain the same tensor more than once.
 output_names - The names of the function's outputs. `output_names` array
                must either have the same length as `outputs`
                (i.e. `noutputs`) or be null. In the former case,
                the names should match the regular expression for ArgDef
                names - "[a-z][a-z0-9_]*". In the latter case,
                names for outputs will be generated automatically.
 opts - various options for the function, e.g. XLA's inlining control.
 description - optional human-readable description of this function.
 status - Set to OK on success and an appropriate error on failure.
 
Note that when the same TF_Output is listed as both an input and an output,
the corresponding function's output will be equal to this input,
instead of the original node's output.
 
Callers must also satisfy the following constraints:
- `inputs` cannot refer to TF_Outputs within a control flow context. For
  example, one cannot use the output of "switch" node as input.
- `inputs` and `outputs` cannot have reference types. Reference types are
  not exposed through C API and are being replaced with Resources. We support
  reference types inside function's body to support legacy code. Do not
  use them in new code.
- Every node in the function's body must have all of its inputs (including

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern const char* TF_FunctionName(TF_Function* func);
 
=head2 TF_FunctionToFunctionDef
 
=over 2
 
  Write out a serialized representation of `func` (as a FunctionDef protocol
  message) to `output_func_def` (allocated by TF_NewBuffer()).
  `output_func_def`'s underlying buffer will be freed when TF_DeleteBuffer()
  is called.
   
  May fail on very large graphs in the future.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_FunctionToFunctionDef(TF_Function* func,
                                                      TF_Buffer* output_func_def,
                                                      TF_Status* status);
 
=head2 TF_FunctionImportFunctionDef
 
=over 2
 
  Construct and return the function whose FunctionDef representation is
  serialized in `proto`. `proto_len` must equal the number of bytes
  pointed to by `proto`.
  Returns:
   On success, a newly created TF_Function instance. It must be deleted by
   calling TF_DeleteFunction.
   
   On failure, null.
 
=back

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

  TF_CAPI_EXPORT extern TF_Function* TF_FunctionImportFunctionDef(
      const void* proto, size_t proto_len, TF_Status* status);
 
=head2 TF_FunctionSetAttrValueProto
 
=over 2
 
  Sets function attribute named `attr_name` to value stored in `proto`.
  If this attribute is already set to another value, it is overridden.
  `proto` should point to a sequence of bytes of length `proto_len`
  representing a binary serialization of an AttrValue protocol
  buffer.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_FunctionSetAttrValueProto(TF_Function* func,
                                                          const char* attr_name,
                                                          const void* proto,
                                                          size_t proto_len,
                                                          TF_Status* status);
 
=head2 TF_FunctionGetAttrValueProto
 
=over 2
 
  Sets `output_attr_value` to the binary-serialized AttrValue proto
  representation of the value of the `attr_name` attr of `func`.
  If `attr_name` attribute is not present, status is set to an error.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_FunctionGetAttrValueProto(
      TF_Function* func, const char* attr_name, TF_Buffer* output_attr_value,
      TF_Status* status);
 
=head2 TF_DeleteFunction

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

=head2 TF_TryEvaluateConstant
 
=over 2
 
  Attempts to evaluate `output`. This will only be possible if `output` doesn't
  depend on any graph inputs (this function is safe to call if this isn't the
  case though).
   
  If the evaluation is successful, this function returns true and `output`s
  value is returned in `result`. Otherwise returns false. An error status is
  returned if something is wrong with the graph or input. Note that this may
  return false even if no error status is set.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern unsigned char TF_TryEvaluateConstant(TF_Graph* graph,
                                                             TF_Output output,
                                                             TF_Tensor** result,
                                                             TF_Status* status);
 
=head2 TF_NewSession
 
=over 2
 
  Return a new execution session with the associated graph, or NULL on
  error. Does not take ownership of any input parameters.
   
  *`graph` must be a valid graph (not deleted or nullptr). `graph` will be

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Session* TF_NewSession(TF_Graph* graph,
                                                  const TF_SessionOptions* opts,
                                                  TF_Status* status);
 
=head2 TF_LoadSessionFromSavedModel
 
=over 2
 
  This function creates a new TF_Session (which is created on success) using
  `session_options`, and then initializes state (restoring tensors and other
  assets) using `run_options`.
   
  Any NULL and non-NULL value combinations for (`run_options`, `meta_graph_def`)
  are valid.
   
  - `export_dir` must be set to the path of the exported SavedModel.
  - `tags` must include the set of tags used to identify one MetaGraphDef in
     the SavedModel.
  - `graph` must be a graph newly allocated with TF_NewGraph().
  

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Session* TF_LoadSessionFromSavedModel(
      const TF_SessionOptions* session_options, const TF_Buffer* run_options,
      const char* export_dir, const char* const* tags, int tags_len,
      TF_Graph* graph, TF_Buffer* meta_graph_def, TF_Status* status);

=head2 TF_CloseSession

=over 2

  Close a session.
   
  Contacts any other processes associated with the session, if applicable.
  May not be called after TF_DeleteSession().

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_CloseSession(TF_Session*, TF_Status* status);
 
=head2 TF_DeleteSession
 
=over 2
 
  Destroy a session object.
   
  Even if error information is recorded in *status, this call discards all
  local resources associated with the session.  The session may not be used
  during or after this call (and the session drops its reference to the
  corresponding graph).
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteSession(TF_Session*, TF_Status* status);
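
The usual lifecycle, as a sketch; `graph` is assumed to exist:

  TF_Status* status = TF_NewStatus();
  TF_SessionOptions* opts = TF_NewSessionOptions();
  TF_Session* session = TF_NewSession(graph, opts, status);
  TF_DeleteSessionOptions(opts);
  if (TF_GetCode(status) == TF_OK) {
    /* ... TF_SessionRun(...) ... */
    TF_CloseSession(session, status);   /* flush pending work */
    TF_DeleteSession(session, status);  /* release local resources */
  }
  TF_DeleteStatus(status);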
 
=head2 TF_SessionRun
 
=over 2
 
  Run the graph associated with the session starting with the supplied inputs
  (inputs[0,ninputs-1] with corresponding values in input_values[0,ninputs-1]).
   
  Any NULL and non-NULL value combinations for (`run_options`,
  `run_metadata`) are valid.
   
     - `run_options` may be NULL, in which case it will be ignored; or
       non-NULL, in which case it must point to a `TF_Buffer` containing the
       serialized representation of a `RunOptions` protocol buffer.
     - `run_metadata` may be NULL, in which case it will be ignored; or
       non-NULL, in which case it must point to an empty, freshly allocated
       `TF_Buffer` that may be updated to contain the serialized representation
       of a `RunMetadata` protocol buffer.
   
  The caller retains ownership of `input_values` (which can be deleted using
  TF_DeleteTensor). The caller also retains ownership of `run_options` and/or
  `run_metadata` (when not NULL) and should manually call TF_DeleteBuffer on
  them.
   
  On success, the tensors corresponding to outputs[0,noutputs-1] are placed in
  output_values[]. Ownership of the elements of output_values[] is transferred
  to the caller, which must eventually call TF_DeleteTensor on them.
   
  On failure, output_values[] contains NULLs.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SessionRun(
      TF_Session* session,
      const TF_Buffer* run_options,
      const TF_Output* inputs, TF_Tensor* const* input_values, int ninputs,
      const TF_Output* outputs, TF_Tensor** output_values, int noutputs,
      const TF_Operation* const* target_opers, int ntargets,
      TF_Buffer* run_metadata,
      TF_Status*);
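
A single-input, single-output run might look like this sketch; `session`,
`input_op`, `output_op` and `in_tensor` are assumed to exist:

  TF_Output inputs[1]         = { input_op };
  TF_Tensor* input_values[1]  = { in_tensor };
  TF_Output outputs[1]        = { output_op };
  TF_Tensor* output_values[1] = { NULL };
  TF_Status* status = TF_NewStatus();
  TF_SessionRun(session,
                NULL,                       /* run_options: ignored when NULL */
                inputs, input_values, 1,    /* feeds */
                outputs, output_values, 1,  /* fetches */
                NULL, 0,                    /* no target operations */
                NULL,                       /* run_metadata: not requested */
                status);
  if (TF_GetCode(status) == TF_OK) {
    /* ownership of output_values[0] is transferred to the caller */
    TF_DeleteTensor(output_values[0]);
  }
  TF_DeleteStatus(status);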

=head2 TF_DeviceListIncarnation

=over 2

  Returns the incarnation number of the device at the given index, or 0 if
  index is out of range.
   
  If index is out of range, an error code will be set in the status object.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern uint64_t TF_DeviceListIncarnation(
      const TF_DeviceList* list, int index, TF_Status* status);
 
=head2 TF_LoadLibrary
 
=over 2
 
  Load the library specified by library_filename and register the ops and
  kernels present in that library.
   
  Pass "library_filename" to a platform-specific mechanism for dynamically
  loading a library. The rules for determining the exact location of the
  library are platform-specific and are not documented here.
   
  On success, place OK in status and return the newly created library handle.
  The caller owns the library handle.
   
  On failure, place an error status in status and return NULL.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Library* TF_LoadLibrary(const char* library_filename,
                                                   TF_Status* status);
 
=head2 TF_GetOpList
 
=over 2
 
  Get the OpList of OpDefs defined in the library pointed to by lib_handle.
   
  Returns a TF_Buffer. The memory pointed to by the result is owned by
  lib_handle. The data in the buffer will be the serialized OpList proto for
  ops defined in the library.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Buffer TF_GetOpList(TF_Library* lib_handle);
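
A loading sketch; "my_custom_ops.so" is a made-up plugin path, and
TF_DeleteLibraryHandle is described in the next section:

  TF_Status* status = TF_NewStatus();
  TF_Library* lib = TF_LoadLibrary("my_custom_ops.so", status);
  if (TF_GetCode(status) == TF_OK) {
    TF_Buffer op_list = TF_GetOpList(lib);  /* memory owned by `lib` */
    /* op_list.data / op_list.length hold the serialized OpList proto */
    TF_DeleteLibraryHandle(lib);
  }
  TF_DeleteStatus(status);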
 
=head2 TF_DeleteLibraryHandle

=over 2

  Frees the memory associated with the library handle.
  Does NOT unload the library.

=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteLibraryHandle(TF_Library* lib_handle);
 
=head2 TF_GetAllOpList
 
=over 2
 
  Get the OpList of all OpDefs defined in this address space.
  Returns a TF_Buffer, ownership of which is transferred to the caller
  (and can be freed using TF_DeleteBuffer).
   
  The data in the buffer will be the serialized OpList proto for ops registered
  in this address space.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Buffer* TF_GetAllOpList(void);
 
=head2 TF_NewApiDefMap
 
=over 2

  Creates a new TF_ApiDefMap instance.

=back

=head2 TF_DeleteApiDefMap

=over 2

  Deallocates a TF_ApiDefMap.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteApiDefMap(TF_ApiDefMap* apimap);
 
=head2 TF_ApiDefMapPut
 
=over 2
 
  Add ApiDefs to the map.
   
  `text` corresponds to a text representation of an ApiDefs protocol message.
   
  The provided ApiDefs will be merged with existing ones in the map, with
  precedence given to the newly added version in case of conflicts with
  previous calls to TF_ApiDefMapPut.
 
=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ApiDefMapPut(TF_ApiDefMap* api_def_map,
                                             const char* text, size_t text_len,
                                             TF_Status* status);

=head2 TF_RegisterFilesystemPlugin

=over 2

=back
 
  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_RegisterFilesystemPlugin(
      const char* plugin_filename, TF_Status* status);
 
=head2 TF_NewShape
 
=over 2
 
  Return a new, unknown rank shape object. The caller is responsible for
  calling TF_DeleteShape to deallocate and destroy the returned shape.
 
=back
 
  /* From <tensorflow/c/tf_shape.h> */
  TF_CAPI_EXPORT extern TF_Shape* TF_NewShape();
 
=head2 TF_ShapeDims
 
=over 2

=back

=head2 TF_TensorElementCount

=over 2

  Returns the number of elements in the tensor.

=back
 
  /* From <tensorflow/c/tf_tensor.h> */
  TF_CAPI_EXPORT extern int64_t TF_TensorElementCount(const TF_Tensor* tensor);
 
=head2 TF_TensorBitcastFrom
 
=over 2
 
  Copy the internal data representation of `from` to `to`. `new_dims` and
  `num_new_dims` specify the new shape of the `to` tensor, `type` specifies its
  data type. On success, *status is set to TF_OK and the two tensors share the
  same data buffer.
   
  This call requires that the `from` tensor and the given type and shape (dims
  and num_dims) are "compatible" (i.e. they occupy the same number of bytes).
  Specifically, given from_type_size = TF_DataTypeSize(TF_TensorType(from)):
   
  ShapeElementCount(dims, num_dims) * TF_DataTypeSize(type)
   
  must equal
   
  TF_TensorElementCount(from) * from_type_size
   
  where TF_ShapeElementCount would be the number of elements in a tensor with
  the given shape.
   
  In addition, this function requires:
    * TF_DataTypeSize(TF_TensorType(from)) != 0
    * TF_DataTypeSize(type) != 0
   
  If any of the requirements are not met, *status is set to
  TF_INVALID_ARGUMENT.
 
=back
 
  /* From <tensorflow/c/tf_tensor.h> */
  TF_CAPI_EXPORT extern void TF_TensorBitcastFrom(const TF_Tensor* from,
                                                  TF_DataType type, TF_Tensor* to,
                                                  const int64_t* new_dims,
                                                  int num_new_dims,
                                                  TF_Status* status);

=head2 TF_StringDealloc

=over 2

=back
 
  /* From <tensorflow/c/tf_tstring.h> */
  TF_CAPI_EXPORT extern void TF_StringDealloc(TF_TString *tstr);
 
=head2 TF_DataTypeSize
 
=over 2
 
  TF_DataTypeSize returns the sizeof() for the underlying type corresponding
  to the given TF_DataType enum value. Returns 0 for variable length types
  (eg. TF_STRING) or on failure.
 
=back
 
  /* From <tensorflow/c/tf_datatype.h> */
  TF_CAPI_EXPORT extern size_t TF_DataTypeSize(TF_DataType dt);
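
This makes it straightforward to compute the payload size of a dense tensor.
A small sketch; `tensor` is assumed to exist:

  /* elem_size is 0 for variable-length types such as TF_STRING. */
  int64_t n = TF_TensorElementCount(tensor);
  size_t elem_size = TF_DataTypeSize(TF_TensorType(tensor));
  size_t nbytes = (elem_size == 0) ? 0 : (size_t)n * elem_size;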
 
=head2 TF_NewOpDefinitionBuilder

=head2 TF_DeleteOpDefinitionBuilder

=over 2

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_DeleteOpDefinitionBuilder(
      TF_OpDefinitionBuilder* builder);
 
=head2 TF_OpDefinitionBuilderAddAttr
 
=over 2
 
  Adds an attr to the given TF_OpDefinitionBuilder. The spec has
  format "<name>:<type>" or "<name>:<type>=<default>"
  where <name> matches regexp [a-zA-Z][a-zA-Z0-9_]*.
  By convention, names containing only capital letters are reserved for
  attributes whose values can be inferred by the operator implementation if not
  supplied by the user. If the attribute name contains characters other than
  capital letters, the operator expects the user to provide the attribute value
  at operation runtime.
   
  <type> can be:
    "string", "int", "float", "bool", "type", "shape", or "tensor"
    "numbertype", "realnumbertype", "quantizedtype"
        (meaning "type" with a restriction on valid values)
    "{int32,int64}" or {realnumbertype,quantizedtype,string}"
        (meaning "type" with a restriction containing unions of value types)
    "{\"foo\", \"bar\n baz\"}", or "{'foo', 'bar\n baz'}"
        (meaning "string" with a restriction on valid values)
    "list(string)", ..., "list(tensor)", "list(numbertype)", ...
        (meaning lists of the above types)
    "int >= 2" (meaning "int" with a restriction on valid values)
    "list(string) >= 2", "list(int) >= 2"
        (meaning "list(string)" / "list(int)" with length at least 2)
  <default>, if included, should use the Proto text format
  of <type>.  For lists use [a, b, c] format.
   
  Note that any attr specifying the length of an input or output will
  get a default minimum of 1 unless the >= # syntax is used.
 
=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_OpDefinitionBuilderAddAttr(
      TF_OpDefinitionBuilder* builder, const char* attr_spec);
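
The spec strings are easiest to see in use. A minimal sketch; "MyExampleOp"
and its attrs are made up for illustration:

  TF_OpDefinitionBuilder* b = TF_NewOpDefinitionBuilder("MyExampleOp");
  TF_OpDefinitionBuilderAddAttr(b, "T: {float, int32} = DT_FLOAT");
  TF_OpDefinitionBuilderAddAttr(b, "N: int >= 2");
  TF_Status* status = TF_NewStatus();
  TF_RegisterOpDefinition(b, status);  /* registration consumes the builder */
  TF_DeleteStatus(status);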

=head2 TF_OpDefinitionBuilderSetIsStateful

=over 2

  Sets the is_stateful property of the builder to the given value.
   
  The op built by this builder is stateful if its behavior depends on some
  state beyond its input tensors (e.g. variable reading op) or if it has a
  side-effect (e.g. printing or asserting ops). Equivalently, stateless ops
  must always produce the same output for the same input and have no
  side-effects.
   
  By default Ops may be moved between devices. Stateful ops should either not
  be moved, or should only be moved if that state can also be moved (e.g. via
  some sort of save / restore). Stateful ops are guaranteed to never be
  optimized away by Common Subexpression Elimination (CSE).
 
=back
 
  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_OpDefinitionBuilderSetIsStateful(
      TF_OpDefinitionBuilder* builder, bool is_stateful);
 
=head2 TF_OpDefinitionBuilderSetAllowsUninitializedInput
 
=over 2

=back

=head2 TF_ShapeInferenceContextNumInputs

=over 2

  Returns the number of inputs in the given shape inference context.

=back
 
  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern int64_t TF_ShapeInferenceContextNumInputs(
      TF_ShapeInferenceContext* ctx);
 
=head2 TF_NewShapeHandle
 
=over 2
 
  Returns a newly allocated shape handle. The shapes represented by these
  handles may be queried or mutated with the corresponding
  TF_ShapeInferenceContext...  functions.
 
=back
 
  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern TF_ShapeHandle* TF_NewShapeHandle();
 
=head2 TF_ShapeInferenceContextGetInput
 
=over 2

=back

=head2 TF_ShapeInferenceContextScalar

=over 2

=back
 
  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern TF_ShapeHandle* TF_ShapeInferenceContextScalar(
      TF_ShapeInferenceContext* ctx);
 
=head2 TF_ShapeInferenceContextVectorFromSize
 
=over 2
 
  Returns a newly-allocated shape handle representing a vector of the given
  size. The returned handle should be freed with TF_DeleteShapeHandle.
 
=back
 
  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern TF_ShapeHandle* TF_ShapeInferenceContextVectorFromSize(
      TF_ShapeInferenceContext* ctx, size_t size);
 
=head2 TF_NewDimensionHandle

=head2 TF_ShapeInferenceContext_GetAttrType

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContext_GetAttrType(
      TF_ShapeInferenceContext* ctx, const char* attr_name, TF_DataType* val,
      TF_Status* status);
 
=head2 TF_ShapeInferenceContextRank
 
=over 2
 
  Returns the rank of the shape represented by the given handle.
 
=back
 
  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern int64_t TF_ShapeInferenceContextRank(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* handle);
 
=head2 TF_ShapeInferenceContextRankKnown
 
=over 2

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern int TF_ShapeInferenceContextRankKnown(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* handle);
 
=head2 TF_ShapeInferenceContextWithRank
 
=over 2
 
  If <handle> has rank <rank>, or its rank is unknown, return OK and return the
  shape with asserted rank in <*result>. Otherwise an error is placed into
  `status`.
 
=back
 
  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContextWithRank(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* handle, int64_t rank,
      TF_ShapeHandle* result, TF_Status* status);
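
A sketch of how this is typically used inside a shape inference function; the
function name is made up, and the signature matches what
TF_OpDefinitionBuilderSetShapeInferenceFunction expects:

  /* Asserts that input 0 is a matrix; frees the handles it allocates. */
  static void ExampleShapeFn(TF_ShapeInferenceContext* ctx, TF_Status* status) {
    TF_ShapeHandle* in = TF_NewShapeHandle();
    TF_ShapeHandle* matrix = TF_NewShapeHandle();
    TF_ShapeInferenceContextGetInput(ctx, 0, in, status);
    if (TF_GetCode(status) == TF_OK) {
      TF_ShapeInferenceContextWithRank(ctx, in, 2, matrix, status);
    }
    TF_DeleteShapeHandle(matrix);
    TF_DeleteShapeHandle(in);
  }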
 
=head2 TF_ShapeInferenceContextWithRankAtLeast
 
=over 2
 
  If <handle> has rank at least <rank>, or its rank is unknown, return OK and
  return the shape with asserted rank in <*result>. Otherwise an error is
  placed into `status`.
 
=back
 
  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContextWithRankAtLeast(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* handle, int64_t rank,
      TF_ShapeHandle* result, TF_Status* status);
 
=head2 TF_ShapeInferenceContextWithRankAtMost
 
=over 2
 
  If <handle> has rank at most <rank>, or its rank is unknown, return OK and
  return the shape with asserted rank in <*result>. Otherwise an error is
  placed into `status`.
 
=back
 
  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContextWithRankAtMost(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* handle, int64_t rank,
      TF_ShapeHandle* result, TF_Status* status);
 
=head2 TF_ShapeInferenceContextDim
 
=over 2
 
  Places a handle to the ith dimension of the given shape into *result.
 
=back
 
  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContextDim(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* shape_handle, int64_t i,
      TF_DimensionHandle* result);
 
=head2 TF_ShapeInferenceContextSubshape
 
=over 2
 
  Returns in <*result> a sub-shape of <shape_handle>, with dimensions
  [start:end]. <start> and <end> can be negative, to index from the end of the
  shape. <start> and <end> are clamped to the rank of <shape_handle> if they
  exceed it.
 
=back
 
  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContextSubshape(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* shape_handle, int64_t start,
      int64_t end, TF_ShapeHandle* result, TF_Status* status);
 
=head2 TF_ShapeInferenceContextSetUnknownShape
 
=over 2
 
  Places an unknown shape in all outputs for the given inference context. Used
  for shape inference functions with ops whose output shapes are unknown.
 
=back
 
  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContextSetUnknownShape(
      TF_ShapeInferenceContext* ctx, TF_Status* status);
 
=head2 TF_DimensionHandleValueKnown
 
=over 2
 
  Returns whether the given handle represents a known dimension.
 
=back
 
  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern int TF_DimensionHandleValueKnown(
      TF_DimensionHandle* dim_handle);
 
=head2 TF_DimensionHandleValue
 
=over 2

  Returns the value of the given dimension.

=back
 
  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern int64_t TF_DimensionHandleValue(
      TF_DimensionHandle* dim_handle);
 
=head2 TF_ShapeInferenceContextConcatenateShapes
 
=over 2
 
  Returns in <*result> the result of appending the dimensions of <second> to
  those of <first>.
 
=back
 
  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContextConcatenateShapes(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* first,
      TF_ShapeHandle* second, TF_ShapeHandle* result, TF_Status* status);
 
=head2 TF_DeleteShapeHandle
 
=over 2
 
  Frees the given shape handle.
 
=back
 
  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_DeleteShapeHandle(TF_ShapeHandle* handle);

=head2 TF_DeleteRecursively

=over 2

  Deletes the specified directory and all subdirectories and files underneath
  it. This is accomplished by traversing the directory tree rooted at dirname
  and deleting entries as they are encountered.
   
  If dirname itself is not readable or does not exist, *undeleted_dir_count is
  set to 1, *undeleted_file_count is set to 0 and an appropriate status (e.g.
  TF_NOT_FOUND) is returned.
   
  If dirname and all its descendants were successfully deleted, TF_OK is
  returned and both error counters are set to zero.
   
  Otherwise, while traversing the tree, undeleted_file_count and
  undeleted_dir_count are updated if an entry of the corresponding type could
  not be deleted. The returned error status represents the reason that any one
  of these entries could not be deleted.
   
  Typical status codes:
   * TF_OK - dirname exists and we were able to delete everything underneath
   * TF_NOT_FOUND - dirname doesn't exist
   * TF_PERMISSION_DENIED - dirname or some descendant is not writable
   * TF_UNIMPLEMENTED - some underlying functions (like Delete) are not
     implemented
 
=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_DeleteRecursively(const char* dirname,
                                                  int64_t* undeleted_file_count,
                                                  int64_t* undeleted_dir_count,
                                                  TF_Status* status);

=head2 TF_FileStat

=over 2

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_FileStat(const char* filename,
                                         TF_FileStatistics* stats,
                                         TF_Status* status);
 
=head2 TF_NewWritableFile
 
=over 2
 
  Creates or truncates the given filename and returns a handle to be used for
  appending data to the file. If status is TF_OK, *handle is updated and the
  caller is responsible for freeing it (see TF_CloseWritableFile).
 
=back
 
  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_NewWritableFile(const char* filename,
                                                TF_WritableFileHandle** handle,
                                                TF_Status* status);
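
A sketch; the path is illustrative, and TF_AppendWritableFile and
TF_CloseWritableFile are the companion calls from the same header:

  TF_Status* status = TF_NewStatus();
  TF_WritableFileHandle* handle = NULL;
  TF_NewWritableFile("/tmp/example.txt", &handle, status);
  if (TF_GetCode(status) == TF_OK) {
    TF_AppendWritableFile(handle, "hello\n", 6, status);
    TF_CloseWritableFile(handle, status);
  }
  TF_DeleteStatus(status);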
 
=head2 TF_CloseWritableFile

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod  view on Meta::CPAN

3773
3774
3775
3776
3777
3778
3779
3780
3781
3782
3783
3784
3785
3786
3787
3788
3789
3790
3791
3792
3793
3794
3795
3796
3797
3798
3799
3800
3801
3802
3803
3804
3805
3806
3807
3808
3809
3810
3811
3812
3813
3814
3815
3816
3817
3818
3819
3820
3821
3822
3823
3824
3825
3826
3827
3828
3829
3830
3831
3832
3833
3834
3835
3836
3837
3838
3839
3840
3841
3842
3843
3844
3845
3846
3847
3848
  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_DeleteFile(const char* filename,
                                           TF_Status* status);
 
=head2 TF_StringStreamNext
 
=over 2
 
  Retrieves the next item from the given TF_StringStream and places a pointer
  to it in *result. If no more items are in the list, *result is set to NULL
  and false is returned.
   
  Ownership of the items retrieved with this function remains with the library.
  Item pointers are invalidated after a call to TF_StringStreamDone.
 
=back
 
  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern bool TF_StringStreamNext(TF_StringStream* list,
                                                 const char** result);
 
=head2 TF_StringStreamDone
 
=over 2
 
  Frees the resources associated with given string list. All pointers returned
  by TF_StringStreamNext are invalid after this call.
 
=back
 
  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_StringStreamDone(TF_StringStream* list);
 
=head2 TF_GetChildren
 
=over 2
 
  Retrieves the list of children of the given directory. You can iterate
  through the list with TF_StringStreamNext. The caller is responsible for
  freeing the list (see TF_StringStreamDone).
 
=back
 
  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern TF_StringStream* TF_GetChildren(const char* filename,
                                                        TF_Status* status);
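
A directory-listing sketch; "/tmp" is illustrative and the usual C includes
are assumed:

  TF_Status* status = TF_NewStatus();
  TF_StringStream* children = TF_GetChildren("/tmp", status);
  if (TF_GetCode(status) == TF_OK) {
    const char* name = NULL;
    while (TF_StringStreamNext(children, &name)) {
      printf("%s\n", name);  /* `name` is owned by the library */
    }
    TF_StringStreamDone(children);
  }
  TF_DeleteStatus(status);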
 
=head2 TF_GetLocalTempDirectories
 
=over 2
 
  Retrieves a list of directory names on the local machine that may be used for
  temporary storage. You can iterate through the list with TF_StringStreamNext.
  The caller is responsible for freeing the list (see TF_StringStreamDone).
 
=back
 
  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern TF_StringStream* TF_GetLocalTempDirectories(void);
 
=head2 TF_GetTempFileName
 
=over 2
 
  Creates a temporary file name with an extension.
  The caller is responsible for freeing the returned pointer.
 
=back
 
  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern char* TF_GetTempFileName(const char* extension);
 
=head2 TF_NowNanos
 
=over 2

  Returns the number of nanoseconds since the Unix epoch.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern uint64_t TF_NowNanos(void);

=head2 TF_StartThread
 
=over 2
 
  Returns a new thread that is running work_func and is identified
  (for debugging/performance-analysis) by thread_name.
   
  The given param (which may be null) is passed to work_func when the thread
  starts. In this way, data may be passed from the thread back to the caller.
   
  Caller takes ownership of the result and must call TF_JoinThread on it
  eventually.
 
=back
 
  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern TF_Thread* TF_StartThread(const TF_ThreadOptions* options,
                                                  const char* thread_name,
                                                  void (*work_func)(void*),
                                                  void* param);
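
A sketch; the worker function and thread name are made up, and zeroed
TF_ThreadOptions fields are assumed to mean "use system defaults":

  static void MyWork(void* param) { /* ... */ }

  TF_ThreadOptions opts = {0};
  TF_Thread* thread = TF_StartThread(&opts, "example-worker", MyWork, NULL);
  TF_JoinThread(thread);  /* waits for MyWork, then frees the thread handle */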

=head2 TF_NewKernelBuilder

=over 2

  Allocates a new kernel builder and returns a pointer to it.
   
  When TensorFlow needs to perform a computation with this kernel, it will
  call compute_func. This function will receive the pointer returned by
  create_func (or null if no create_func was provided), along with the inputs
  to the computation.
   
  The TF_OpKernelContext pointer received by compute_func is owned by
  TensorFlow and will be deleted once compute_func returns. It must not be used
  after this.
   
  Finally, when TensorFlow no longer needs the kernel, it will call
  delete_func if one is provided. This function will receive the pointer
  returned in `create_func` or nullptr if no `create_func` was provided.
   
  The caller should pass the result of this function to
  TF_RegisterKernelBuilder, which will take ownership of the pointer. If, for
  some reason, the kernel builder will not be registered, the caller should
  delete it with TF_DeleteKernelBuilder.
 
=back
 
  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern TF_KernelBuilder* TF_NewKernelBuilder(
      const char* op_name, const char* device_name,
      void* (*create_func)(TF_OpKernelConstruction*),
      void (*compute_func)(void*, TF_OpKernelContext*),
      void (*delete_func)(void*));

=head2 TF_KernelBuilder_TypeConstraint

=over 2

  Specifies that this kernel's attribute only supports the given type.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_KernelBuilder_TypeConstraint(
      TF_KernelBuilder* kernel_builder, const char* attr_name,
      const TF_DataType type, TF_Status* status);
 
=head2 TF_KernelBuilder_HostMemory
 
=over 2
 
  Specify that this kernel requires/provides an input/output arg
  in host memory (instead of the default, device memory).
 
=back
 
  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_KernelBuilder_HostMemory(
      TF_KernelBuilder* kernel_builder, const char* arg_name);
 
=head2 TF_KernelBuilder_Priority

=head2 TF_GetInput

=over 2

  Retrieves the ith input from ctx. If TF_GetCode(status) is TF_OK, *tensor is
  populated and its ownership is passed to the caller. In any other case,
  *tensor is not modified.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_GetInput(TF_OpKernelContext* ctx, int i,
                                         TF_Tensor** tensor, TF_Status* status);
 
=head2 TF_InputRange
 
=over 2
 
  Retrieves the start and stop indices, given the input name. Equivalent to
  OpKernel::InputRange(). `args` will contain the result indices and status.
 
=back
 
  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_InputRange(TF_OpKernelContext* ctx,
                                           const char* name,
                                           TF_InputRange_Args* args);
 
=head2 TF_SetOutput

=head2 TF_GetOpKernelName

=over 2

=back
 
  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern TF_StringView TF_GetOpKernelName(TF_OpKernelContext* ctx);
 
=head2 TF_GetResourceMgrDefaultContainerName
 
=over 2
 
  Returns the default container of the resource manager in OpKernelContext.
   
  The returned TF_StringView's underlying string is owned by the OpKernel and
  has the same lifetime as the OpKernel.
 
=back
 
  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern TF_StringView TF_GetResourceMgrDefaultContainerName(
      TF_OpKernelContext* ctx);

=head2 TF_OpKernelConstruction_GetAttrBoolList

=over 2

  Interprets the named kernel construction attribute as bool array and fills
  in `vals` with the size of `max_vals`. *status is set to TF_OK.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrBoolList(
      TF_OpKernelConstruction* ctx, const char* attr_name, TF_Bool* vals,
      int max_vals, TF_Status* status);
 
=head2 TF_OpKernelConstruction_GetAttrStringList
 
=over 2
 
  Interprets the named kernel construction attribute as a string array and fills
  in `vals` and `lengths`, each of which must point to an array of length at
  least `max_values`. *status is set to TF_OK. The elements of values will
  point to addresses in `storage` which must be at least `storage_size` bytes
  in length. Ideally, max_values would be set to list_size and `storage` would
  be at least total_size, obtained from
  TF_OpKernelConstruction_GetAttrSize(ctx, attr_name, list_size,
  total_size).
 
=back
 
  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrStringList(
      TF_OpKernelConstruction* ctx, const char* attr_name, char** vals,
      size_t* lengths, int max_values, void* storage, size_t storage_size,
      TF_Status* status);

=head2 TF_OpKernelConstruction_GetAttrTensorList

=over 2

  Interprets the named kernel construction attribute as a TF_Tensor array and
  fills in `vals` with the size of `max_values`. The caller takes ownership of
  all the non-null TF_Tensor* entries in `vals` (which can be deleted using
  TF_DeleteTensor(vals[i])). *status is set to TF_OK.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrTensorList(
      TF_OpKernelConstruction* ctx, const char* attr_name, TF_Tensor** vals,
      int max_values, TF_Status* status);
 
=head2 TF_OpKernelConstruction_GetAttrFunction
 
=over 2
 
  Interprets the named kernel construction attribute as a
  tensorflow::NameAttrList and returns the serialized proto as TF_Buffer.
  `status` will be set. The caller takes ownership of the returned TF_Buffer
  (if not null) and is responsible for managing its lifetime.
 
=back
 
  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern TF_Buffer* TF_OpKernelConstruction_GetAttrFunction(
      TF_OpKernelConstruction* ctx, const char* attr_name, TF_Status* status);
 
=head2 TF_OpKernelConstruction_HasAttr
 
=over 2

=back

=head2 TF_AllocateTemp

=over 2

  Allocates a temporary Tensor of the specified type and shape. The Tensor
  must not be used after kernel construction is complete.
   
  num_dims must equal the size of array dims.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern TF_Tensor* TF_AllocateTemp(
      TF_OpKernelContext* context, TF_DataType dtype, const int64_t* dims,
      int num_dims, TF_AllocatorAttributes* alloc_attrs, TF_Status* status);
 
=head2 TF_AssignVariable
 
=over 2
 
  Expose higher level Assignment operation for Pluggable vendors to implement
  in the plugin for Training. The API takes in the context with indices for
  the input and value tensors. It also accepts the copy callback provided by
  pluggable vendor to do the copying of the tensors. The caller takes ownership
  of the `source` and `dest` tensors and is responsible for freeing them with
  TF_DeleteTensor. This function will return an error when the following
  conditions are met:
    1. `validate_shape` is set to `true`
    2. The variable is initialized
    3. The shape of the value tensor doesn't match the shape of the variable
       tensor.
 
=back
 
  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_AssignVariable(
      TF_OpKernelContext* ctx, int input_index, int value_index,
      bool validate_shape,
      void (*copyFunc)(TF_OpKernelContext* ctx, TF_Tensor* source,
                       TF_Tensor* dest),
      TF_Status* status);
 
=head2 TF_AssignRefVariable
 
=over 2
 
  Expose higher level Assignment operation for Pluggable vendors to implement
  in the plugin for Training on ref variables. The API takes in the context
  with indices for the input and value tensors. It also accepts the copy
  callback provided by pluggable vendor to do the copying of the tensors. The
  caller takes ownership of the `source` and `dest` tensors and is responsible
  for freeing them with TF_DeleteTensor.
 
=back
 
  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_AssignRefVariable(
      TF_OpKernelContext* ctx, int input_ref_index, int output_ref_index,
      int value_index, bool use_locking, bool validate_shape,
      void (*copyFunc)(TF_OpKernelContext* ctx, TF_Tensor* source,
                       TF_Tensor* dest),
      TF_Status* status);

=head2 TF_AssignUpdateVariable
 
=over 2
 
  Expose higher level AssignUpdate operation for Pluggable vendors to implement
  in the plugin for Training. The API takes in the context with indices for the
  input and value tensors. It also accepts the copy callback provided by
  pluggable vendor to do the copying of the tensors and the update callback to
  apply the arithmetic operation. The caller takes ownership of the `source`,
  `dest`, `tensor` and `value` tensors and is responsible for freeing them with
  TF_DeleteTensor.
 
=back
 
  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_AssignUpdateVariable(
      TF_OpKernelContext* ctx, int input_index, int value_index, int Op,
      int isVariantType,
      void (*copyFunc)(TF_OpKernelContext* ctx, TF_Tensor* source,
                       TF_Tensor* dest),
      void (*updateFunc)(TF_OpKernelContext* ctx, TF_Tensor* tensor,
                         TF_Tensor* value, int Op),
      TF_Status* status);
 
=head2 TF_MaybeLockVariableInputMutexesInOrder
 
=over 2
 
  This is a helper function which acquires mutexes in-order to provide a
  thread-safe way of performing weights update during the optimizer op. It
  returns an opaque LockHolder handle back to plugin. This handle is passed to
  the Release API for releasing the locks when the weight update is done. The
  caller takes ownership of the `source` and `dest` tensors and is responsible
  for freeing them with TF_DeleteTensor.
 
=back
 
  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_MaybeLockVariableInputMutexesInOrder(
      TF_OpKernelContext* ctx, bool do_lock, bool sparse, const int* const inputs,
      size_t len,
      void (*copyFunc)(TF_OpKernelContext* ctx, TF_Tensor* source,
                       TF_Tensor* dest),
      TF_VariableInputLockHolder** lockHolder, TF_Status* status);
 
=head2 TF_GetInputTensorFromVariable
 
=over 2
 
  This interface returns the `out` tensor, which is updated corresponding to the
  variable passed with the input index. The caller takes ownership of the `source`
  and `dest` tensors and is responsible for freeing them with TF_DeleteTensor.
 
=back
 
  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_GetInputTensorFromVariable(
      TF_OpKernelContext* ctx, int input, bool lock_held, bool isVariantType,
      bool sparse,
      void (*copyFunc)(TF_OpKernelContext* ctx, TF_Tensor* source,
                       TF_Tensor* dest),
      TF_Tensor** out, TF_Status* status);
 
=head2 TF_OpKernelContext_ForwardRefInputToRefOutput
 
=over 2
 
  This interface forwards the reference from input to the output tensors
  corresponding to the indices provided with `input_index` and `output_index`.
 
=back
 
  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelContext_ForwardRefInputToRefOutput(
      TF_OpKernelContext* ctx, int32_t input_index, int32_t output_index);
 
=head2 TF_ReleaseVariableInputLockHolder
 
=over 2

  This interface releases the opaque lock handle returned by the
  TF_MaybeLockVariableInputMutexesInOrder API.

=back
 
=head2 TF_AddNVariant
 
=over 2
 
  Expose higher level AddN operation for Pluggable vendors to implement
  in the plugin for Variant data types. The API takes in the context and a
  callback provided by pluggable vendor to do a Binary Add operation on the
  tensors unwrapped from the Variant tensors. The caller takes ownership of the
  `a`, `b` and `out` tensors and is responsible for freeing them with
  TF_DeleteTensor.
 
=back
 
  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_AddNVariant(
      TF_OpKernelContext* ctx,
      void (*binary_add_func)(TF_OpKernelContext* ctx, TF_Tensor* a, TF_Tensor* b,
                              TF_Tensor* out),
      TF_Status* status);
 
=head2 TF_ZerosLikeVariant
 
=over 2
 
  Expose higher level ZerosLike operation for Pluggable vendors to implement
  in the plugin for Variant data types. The API takes in the context and a
  callback provided by pluggable vendor to do a ZerosLike operation on the
  tensors unwrapped from the Variant tensors. The caller takes ownership of the
  `input` and `out` tensors and is responsible for freeing them with
  TF_DeleteTensor.
 
=back
 
  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_ZerosLikeVariant(
      TF_OpKernelContext* ctx,
      void (*zeros_like_func)(TF_OpKernelContext* ctx, TF_Tensor* input,
                              TF_Tensor* out),
      TF_Status* status);

=head2 TFE_ContextListDevices

=over 2

  Lists all devices in a TFE_Context. Caller takes ownership of the returned
  TF_DeviceList.

=back
 
  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TF_DeviceList* TFE_ContextListDevices(TFE_Context* ctx,
                                                              TF_Status* status);
 
=head2 TFE_ContextClearCaches
 
=over 2
 
  Clears the internal caches in the TFE context. Useful when reseeding random
  ops.
 
=back
 
  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextClearCaches(TFE_Context* ctx);
 
=head2 TFE_ContextSetThreadLocalDevicePlacementPolicy
 
=over 2

=back

=head2 TFE_TensorHandleDeviceName

=over 2

  Returns the device of the operation that produced `h`. If `h` was produced
  by a copy, returns the destination device of the copy. Note that the
  returned device name is not always the device holding the tensor handle's
  memory. If you want the latter, use TFE_TensorHandleBackingDeviceName. This
  function will block till the operation that produces `h` has completed.

=back
 
  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern const char* TFE_TensorHandleDeviceName(
      TFE_TensorHandle* h, TF_Status* status);
 
=head2 TFE_TensorHandleBackingDeviceName
 
=over 2
 
  Returns the name of the device in whose memory `h` resides.
   
  This function will block till the operation that produces `h` has completed.
 
=back
 
  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern const char* TFE_TensorHandleBackingDeviceName(
      TFE_TensorHandle* h, TF_Status* status);
 
=head2 TFE_TensorHandleCopySharingTensor
 
=over 2
 
  Return a pointer to a new TFE_TensorHandle that shares the underlying tensor
  with `h`. On success, `status` is set to OK. On failure, `status` reflects
  the error and a nullptr is returned.
 
=back
 
  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TFE_TensorHandle* TFE_TensorHandleCopySharingTensor(
      TFE_TensorHandle* h, TF_Status* status);
 
=head2 TFE_TensorHandleResolve

=over 2

  This function will block till the operation that produces `h` has completed.
  The memory returned might alias the internal memory used by TensorFlow.
  Hence, callers should not mutate this memory (for example by modifying the
  memory region pointed to by TF_TensorData() on the returned TF_Tensor).

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TF_Tensor* TFE_TensorHandleResolve(TFE_TensorHandle* h,
                                                           TF_Status* status);
 
=head2 TFE_TensorHandleCopyToDevice
 
=over 2
 
  Create a new TFE_TensorHandle with the same contents as 'h' but placed
  in the memory of the device name 'device_name'.
  If source and destination are the same device, then this creates a new handle
  that shares the underlying buffer. Otherwise, it currently requires at least
  one of the source or destination devices to be CPU (i.e., for the source or
  destination tensor to be placed in host memory).
  If async execution is enabled, the copy may be enqueued and the call will
  return "non-ready" handle. Else, this function returns after the copy has
  been done.
 
=back
 
  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TFE_TensorHandle* TFE_TensorHandleCopyToDevice(
      TFE_TensorHandle* h, TFE_Context* ctx, const char* device_name,
      TF_Status* status);
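
A sketch; "/device:GPU:0" is illustrative and requires a GPU-enabled build,
while `h` and `ctx` are assumed to exist:

  TF_Status* status = TF_NewStatus();
  TFE_TensorHandle* gpu_h =
      TFE_TensorHandleCopyToDevice(h, ctx, "/device:GPU:0", status);
  if (TF_GetCode(status) == TF_OK) {
    /* ... use gpu_h ... */
    TFE_DeleteTensorHandle(gpu_h);
  }
  TF_DeleteStatus(status);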
 
=head2 TFE_TensorHandleTensorDebugInfo
 
=over 2
 
  Retrieves TFE_TensorDebugInfo for `handle`.
  If TFE_TensorHandleTensorDebugInfo succeeds, `status` is set to OK and caller
  is responsible for deleting returned TFE_TensorDebugInfo.
  If TFE_TensorHandleTensorDebugInfo fails, `status` is set to appropriate
  error and nullptr is returned. This function can block till the operation
  that produces `handle` has completed.
 
=back
 
  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TFE_TensorDebugInfo* TFE_TensorHandleTensorDebugInfo(
      TFE_TensorHandle* h, TF_Status* status);

=head2 TFE_DeleteTensorDebugInfo

=over 2

  Deletes `debug_info`.

=back
 
  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_DeleteTensorDebugInfo(
      TFE_TensorDebugInfo* debug_info);
 
=head2 TFE_TensorDebugInfoOnDeviceNumDims
 
=over 2
 
  Returns the number of dimensions used to represent the tensor on its device.
  The number of dimensions used to represent the tensor on device can be
  different from the number returned by TFE_TensorHandleNumDims.
  The return value was current at the time of TFE_TensorDebugInfo creation.
 
=back
 
  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern int TFE_TensorDebugInfoOnDeviceNumDims(
      TFE_TensorDebugInfo* debug_info);
 
=head2 TFE_TensorDebugInfoOnDeviceDim
 
=over 2
 
  Returns the number of elements in dimension `dim_index`.
  Tensor representation on device can be transposed from its representation
  on host. The data contained in dimension `dim_index` on device
  can correspond to the data contained in another dimension in on-host
  representation. The dimensions are indexed using the standard TensorFlow
  major-to-minor order (slowest varying dimension first),
  not the XLA's minor-to-major order.
  On-device dimensions can be padded. TFE_TensorDebugInfoOnDeviceDim returns
  the number of elements in a dimension after padding.
  The return value was current at the time of TFE_TensorDebugInfo creation.
 
=back
 
  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern int64_t TFE_TensorDebugInfoOnDeviceDim(
      TFE_TensorDebugInfo* debug_info, int dim_index);

=head2 TFE_OpSetAttrType

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrType(TFE_Op* op, const char* attr_name,
                                               TF_DataType value);
 
=head2 TFE_OpSetAttrShape
 
=over 2
 
  If the number of dimensions is unknown, `num_dims` must be set to
  -1 and `dims` can be null.  If a dimension is unknown, the
  corresponding entry in the `dims` array must be -1.
 
=back
 
  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrShape(TFE_Op* op, const char* attr_name,
                                                const int64_t* dims,
                                                const int num_dims,
                                                TF_Status* out_status);
 
=head2 TFE_OpSetAttrFunction

=head2 TFE_ContextExportRunMetadata

=over 2

  Populates the passed-in buffer with a serialized RunMetadata protocol buffer
  containing any run metadata information accumulated so far and clears this
  information.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextExportRunMetadata(TFE_Context* ctx,
                                                          TF_Buffer* buf,
                                                          TF_Status* status);
 
=head2 TFE_ContextStartStep
 
=over 2
 
  Some TF ops need a step container to be set to limit the lifetime of some
  resources (mostly TensorArray and Stack, used in while loop gradients in
  graph mode). Calling this on a context tells it to start a step.
 
=back
 
  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextStartStep(TFE_Context* ctx);
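
A sketch pairing this with TFE_ContextEndStep (documented next); `ctx` is
assumed to exist:

  TFE_ContextStartStep(ctx);
  /* ... execute ops that create per-step resources ... */
  TFE_ContextEndStep(ctx);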
 
=head2 TFE_ContextEndStep
 
=over 2

  Ends a step. When there is no active step (that is, every started step has
  been ended) step containers will be cleaned up.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextEndStep(TFE_Context* ctx);

=head2 TFE_CallDLManagedTensorDeleter

=over 2

  Calls the destructor of DLManagedTensor, used in the destructor of
  PyCapsule.

=back
 
  /* From <tensorflow/c/eager/dlpack.h> */
  TF_CAPI_EXPORT extern void TFE_CallDLManagedTensorDeleter(void* dlm_ptr);
 
=head2 TFE_OpReset
 
=over 2
 
  Resets `op_to_reset` with `op_or_function_name` and `raw_device_name`. This
  is for performance optimization by reusing an existing unused op rather than
  creating a new op every time. If `raw_device_name` is `NULL` or empty, it
  does not set the device name. If it's not `NULL`, then it attempts to parse
  and set the device name. It's effectively `TFE_OpSetDevice`, but it is faster
  than separately calling it because if the existing op has the same
  `raw_device_name`, it skips parsing and just leaves it as it is.
 
=back
 
  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_OpReset(TFE_Op* op_to_reset,
                                         const char* op_or_function_name,
                                         const char* raw_device_name,
                                         TF_Status* status);
 
=head2 TFE_ContextEnableGraphCollection
 
=over 2
 
  Enables only graph collection in RunMetadata on the functions executed from
  this context.

=back

=head2 TFE_ContextAsyncWait

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_ContextAsyncWait(TFE_Context* ctx,
                                                  TF_Status* status);
 
=head2 TFE_TensorHandleDevicePointer
 
=over 2
 
  This function will block till the operation that produces `h` has
  completed. This is only valid on local TFE_TensorHandles. The pointer
  returned will be on the device in which the TFE_TensorHandle resides (so e.g.
  for a GPU tensor this will return a pointer to GPU memory). The pointer is
  only guaranteed to be valid until TFE_DeleteTensorHandle is called on this
  TensorHandle. Only supports POD data types.
 
=back
 
  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void* TFE_TensorHandleDevicePointer(TFE_TensorHandle*,
                                                            TF_Status*);

=head2 TFE_TensorHandleDeviceMemorySize

=over 2

  Returns the size in bytes of the memory pointed to by the device pointer
  returned by TFE_TensorHandleDevicePointer.

=back
 
  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern size_t TFE_TensorHandleDeviceMemorySize(TFE_TensorHandle*,
                                                                TF_Status*);
 
=head2 TFE_NewTensorHandleFromDeviceMemory
 
=over 2
 
  Creates a new TensorHandle from memory residing in the physical device
  device_name. Takes ownership of the memory, and will call deleter to release
  it after TF no longer needs it or in case of error.
   
  Custom devices must use TFE_NewCustomDeviceTensorHandle instead.
 
=back
 
  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_TensorHandle* TFE_NewTensorHandleFromDeviceMemory(
      TFE_Context* ctx, const char* device_name, TF_DataType, const int64_t* dims,
      int num_dims, void* data, size_t len,
      void (*deallocator)(void* data, size_t len, void* arg),
      void* deallocator_arg, TF_Status* status);
 
=head2 TFE_HostAddressSpace
 
=over 2
 
  Retrieves the address space (i.e. job, replica, task) of the local host and
  saves it in the buffer.
 
=back
 
  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_HostAddressSpace(TFE_Context* ctx,
                                                  TF_Buffer* buf);
 
=head2 TFE_OpGetAttrs
 
=over 2
 
  Fetch a reference to `op`'s attributes. The returned reference is only valid
  while `op` is alive.
 
=back

=head2 TFE_IsCustomDevice

=over 2

  Returns whether `device_name` maps to a registered custom device.

=back
 
  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern bool TFE_IsCustomDevice(TFE_Context* ctx,
                                                const char* device_name);
 
=head2 TFE_NewCustomDeviceTensorHandle
 
=over 2
 
  Creates a new TensorHandle from memory residing in a custom device. Takes
  ownership of the memory pointed to by `tensor_handle_data`, and calls
  `methods.deallocator` to release it after TF no longer needs it or in case of
  an error.
   
  This call is similar to `TFE_NewTensorHandleFromDeviceMemory`, but supports
  custom devices instead of physical devices and does not require blocking
  waiting for exact shapes.
 
=back

=head2 TFE_InsertConfigKeyValue

=over 2

  Set configuration key and value using coordination service.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_InsertConfigKeyValue(TFE_Context* ctx,
                                                      const char* key,
                                                      const char* value,
                                                      TF_Status* status);
 
=head2 TFE_GetConfigKeyValue
 
=over 2
 
  Get configuration key and value using coordination service.
  The config key must be set before getting its value. Getting the value of a
  non-existing config key will result in an error.
 
=back
 
  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_GetConfigKeyValue(TFE_Context* ctx,
                                                   const char* key,
                                                   TF_Buffer* value_buf,
                                                   TF_Status* status);
 
=head2 TFE_DeleteConfigKeyValue

=head2 TFE_WaitAtBarrier

=over 2
 
=back
 
  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_WaitAtBarrier(TFE_Context* ctx,
                                               const char* barrier_id,
                                               int64_t barrier_timeout_in_ms,
                                               TF_Status* status);
 
=head2 TF_GetNodesToPreserveListSize
 
=over 2
 
  Get a set of node names that must be preserved. They cannot be transformed
  or removed during the graph transformation. This includes feed and fetch
  nodes, keep_ops, init_ops. Fills in `num_values` and `storage_size`, they
  will be used in `TF_GetNodesToPreserveList`.
 
=back
 
  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_GetNodesToPreserveListSize(
      const TF_GrapplerItem* item, int* num_values, size_t* storage_size,
      TF_Status* status);
 
=head2 TF_GetNodesToPreserveList
 
=over 2
 
  Get a set of node names that must be preserved. They cannot be transformed
  or removed during the graph transformation. This includes feed and fetch
  nodes, keep_ops, init_ops. Fills in `values` and `lengths`, each of which
  must point to an array of length at least `num_values`.
   
  The elements of values will point to addresses in `storage` which must be at
  least `storage_size` bytes in length.  `num_values` and `storage` can be
  obtained from TF_GetNodesToPreserveListSize
   
  Fails if storage_size is too small to hold the requested number of strings.
 
=back
 
  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_GetNodesToPreserveList(
      const TF_GrapplerItem* item, char** values, size_t* lengths, int num_values,
      void* storage, size_t storage_size, TF_Status* status);
 
=head2 TF_GetFetchNodesListSize
 
=over 2
 
  Get a set of node names for fetch nodes. Fills in `num_values` and
  `storage_size`, they will be used in `TF_GetFetchNodesList`

=back

  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_GetFetchNodesListSize(const TF_GrapplerItem* item,
                                                      int* num_values,
                                                      size_t* storage_size,
                                                      TF_Status* status);
 
=head2 TF_GetFetchNodesList
 
=over 2
 
  Get a set of node names for fetch nodes. Fills in `values` and `lengths`,
  each of which must point to an array of length at least `num_values`.
   
  The elements of values will point to addresses in `storage` which must be at
  least `storage_size` bytes in length.  `num_values` and `storage` can be
  obtained from TF_GetFetchNodesListSize
   
  Fails if storage_size is too small to hold the requested number of strings.
 
=back
 
  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_GetFetchNodesList(const TF_GrapplerItem* item,
                                                  char** values, size_t* lengths,
                                                  int num_values, void* storage,
                                                  size_t storage_size,
                                                  TF_Status* status);

=head2 TF_DeleteGraphProperties

=over 2

=back

  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_DeleteGraphProperties(
      TF_GraphProperties* graph_properties);
 
=head2 TF_InferStatically
 
=over 2
 
  Infer tensor shapes through abstract interpretation.
  If assume_valid_feeds is true, it can help infer shapes in the fanout of fed
  nodes. This may cause incorrectness in graph analyses, but is useful for
  simulation or scheduling.
  If aggressive_shape_inference is true, nodes are executed on the host to
  identify output values when possible, and other aggressive strategies are
  applied. This may cause incorrectness in graph analyses, but is useful for
  simulation or scheduling.
  If include_input_tensor_values is true, the values of constant tensors will
  be included in the input properties.
  If include_output_tensor_values is true, the values of constant tensors will
  be included in the output properties.
 
=back
 
  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_InferStatically(
      TF_GraphProperties* graph_properties, TF_Bool assume_valid_feeds,
      TF_Bool aggressive_shape_inference, TF_Bool include_input_tensor_values,
      TF_Bool include_output_tensor_values, TF_Status* s);
 
=head2 TF_GetInputPropertiesListSize
 
=over 2
 
  Get the size of input OpInfo::TensorProperties given node name.
 
=back

=head2 TF_DeleteFunctionLibraryDefinition

  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_DeleteFunctionLibraryDefinition(
      TF_FunctionLibraryDefinition* fn_lib);
 
=head2 TF_LookUpOpDef
 
=over 2
 
  Shorthand for calling LookUp to get the OpDef from FunctionLibraryDefinition
  given op name. The returned OpDef is represented by TF_Buffer.
 
=back
 
  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_LookUpOpDef(TF_FunctionLibraryDefinition* fn_lib,
                                            const char* name, TF_Buffer* buf,
                                            TF_Status* s);
 
=head2 TF_TensorSpecDataType

=head2 TF_LoadSavedModelWithTags

  /* From <tensorflow/c/experimental/saved_model/public/saved_model_api.h> */
  TF_CAPI_EXPORT extern TF_SavedModel* TF_LoadSavedModelWithTags(
      const char* dirname, TFE_Context* ctx, const char* const* tags,
      int tags_len, TF_Status* status);
 
=head2 TF_DeleteSavedModel
 
=over 2
 
  Deletes a TF_SavedModel, and frees any resources owned by it.
 
=back
 
  /* From <tensorflow/c/experimental/saved_model/public/saved_model_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteSavedModel(TF_SavedModel* model);
 
=head2 TF_GetSavedModelConcreteFunction
 
=over 2

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod  view on Meta::CPAN

7751
7752
7753
7754
7755
7756
7757
7758
7759
7760
7761
7762
7763
7764
7765
7766
7767
7768
7769
7770
7771
7772
7773
7774
7775
7776
7777
7778
7779
7780
7781
7782
7783
7784
7785
7786
7787
7788
7789
7790
7791
7792
  TF_CAPI_EXPORT extern TF_FunctionMetadata* TF_ConcreteFunctionGetMetadata(
      TF_ConcreteFunction* func);
 
=head2 TF_ConcreteFunctionMakeCallOp
 
=over 2
 
  Returns a TFE_Op suitable for executing this function. Caller must provide
  all function inputs in `inputs`, and must not add any additional inputs on
  the returned op. (i.e. don't call TFE_OpAddInput or TFE_OpAddInputList).
  The caller is responsible for deleting the returned TFE_Op. If op
  construction fails, `status` will be non-OK and the returned pointer will be
  null.
  TODO(bmzhao): Remove this function in a subsequent change; Design + implement
  a Function Execution interface for ConcreteFunction that accepts a tagged
  union of types (tensorflow::Value). This effectively requires moving much of
  the implementation of function.py/def_function.py to C++, and exposing a
  high-level API here. A strawman for what this interface could look like:
  TF_Value* TF_ExecuteFunction(TFE_Context*, TF_ConcreteFunction*, TF_Value*
  inputs, int num_inputs, TF_Status* status);
 
=back
 
  /* From <tensorflow/c/experimental/saved_model/public/concrete_function.h> */
  TF_CAPI_EXPORT extern TFE_Op* TF_ConcreteFunctionMakeCallOp(
      TF_ConcreteFunction* func, TFE_TensorHandle** inputs, int num_inputs,
      TF_Status* status);
 
=head2 TF_SignatureDefParamName
 
=over 2
 
  Returns the name of the given parameter. The caller is not responsible for
  freeing the returned char*.
 
=back
 
  /* From <tensorflow/c/experimental/saved_model/public/signature_def_param.h> */
  TF_CAPI_EXPORT extern const char* TF_SignatureDefParamName(
      const TF_SignatureDefParam* param);
 
=head2 TF_SignatureDefParamTensorSpec

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod  view on Meta::CPAN

  TF_CAPI_EXPORT extern TF_SignatureDefFunctionMetadata*
  TF_SignatureDefFunctionGetMetadata(TF_SignatureDefFunction* func);
 
=head2 TF_SignatureDefFunctionMakeCallOp
 
=over 2
 
  Returns a TFE_Op suitable for executing this function. Caller must provide
  all function inputs in `inputs`, and must not add any additional inputs on
  the returned op. (i.e. don't call TFE_OpAddInput or TFE_OpAddInputList).
  The caller is responsible for deleting the returned TFE_Op. If op
  construction fails, `status` will be non-OK and the returned pointer will be
  null.
 
=back
 
  /* From <tensorflow/c/experimental/saved_model/public/signature_def_function.h> */
  TF_CAPI_EXPORT extern TFE_Op* TF_SignatureDefFunctionMakeCallOp(
      TF_SignatureDefFunction* func, TFE_TensorHandle** inputs, int num_inputs,
      TF_Status* status);

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod  view on Meta::CPAN

  /* From <tensorflow/c/experimental/saved_model/public/signature_def_param_list.h> */
  TF_CAPI_EXPORT extern const TF_SignatureDefParam* TF_SignatureDefParamListGet(
      const TF_SignatureDefParamList* list, int i);
 
=head2 TF_SignatureDefFunctionMetadataArgs
 
=over 2
 
  Retrieves the arguments of the SignatureDefFunction. The caller is not
  responsible for freeing the returned pointer.
 
=back
 
  /* From <tensorflow/c/experimental/saved_model/public/signature_def_function_metadata.h> */
  TF_CAPI_EXPORT extern const TF_SignatureDefParamList*
  TF_SignatureDefFunctionMetadataArgs(
      const TF_SignatureDefFunctionMetadata* list);
 
=head2 TF_SignatureDefFunctionMetadataReturns
 
=over 2
 
  Retrieves the returns of the SignatureDefFunction. The caller is not
  responsible for freeing the returned pointer.
 
=back
 
  /* From <tensorflow/c/experimental/saved_model/public/signature_def_function_metadata.h> */
  TF_CAPI_EXPORT extern const TF_SignatureDefParamList*
  TF_SignatureDefFunctionMetadataReturns(
      const TF_SignatureDefFunctionMetadata* list);
 
=head2 TF_EnableXLACompilation

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod  view on Meta::CPAN

8085
8086
8087
8088
8089
8090
8091
8092
8093
8094
8095
8096
8097
8098
8099
8100
8101
8102
8103
8104
8105
  TF_CAPI_EXPORT extern char* TF_FunctionDebugString(TF_Function* func,
                                                     size_t* len);
 
=head2 TF_DequeueNamedTensor
 
=over 2
 
  Caller must call TF_DeleteTensor() over the returned tensor. If the queue is
  empty, this call is blocked.
   
  Tensors are enqueued via the corresponding TF enqueue op.
  TODO(hongm): Add support for `timeout_ms`.
 
=back
 
  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TF_Tensor* TF_DequeueNamedTensor(TF_Session* session,
                                                         int tensor_id,
                                                         TF_Status* status);
 
=head2 TF_EnqueueNamedTensor

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod  view on Meta::CPAN

  On success, enqueues `tensor` into a TF-managed FifoQueue given by
  `tensor_id`, associated with `session`. There must be a graph node named
  "fifo_queue_enqueue_<tensor_id>", to be executed by this API call. It reads
  from a placeholder node "arg_tensor_enqueue_<tensor_id>".
   
  `tensor` is still owned by the caller. This call will be blocked if the queue
  has reached its capacity, and will be unblocked when the queued tensors again
  drop below the capacity due to dequeuing.
   
  Tensors are dequeued via the corresponding TF dequeue op.
  TODO(hongm): Add support for `timeout_ms`.
 
=back
 
  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TF_EnqueueNamedTensor(TF_Session* session,
                                                   int tensor_id,
                                                   TF_Tensor* tensor,
                                                   TF_Status* status);

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod  view on Meta::CPAN

=back
 
  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TF_AttrBuilderCheckCanRunOnDevice(
      TF_AttrBuilder* builder, const char* device_type, TF_Status* status);
 
=head2 TF_GetNumberAttrForOpListInput
 
=over 2
 
  For argument number input_index, fetch the corresponding number_attr that
  needs to be updated with the argument length of the input list.
  Returns nullptr if there is any problem, such as op_name not being found or
  the argument not supporting this attribute type.
 
=back
 
  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern const char* TF_GetNumberAttrForOpListInput(
      const char* op_name, int input_index, TF_Status* status);

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod  view on Meta::CPAN

  TF_CAPI_EXPORT extern void TFE_EnableCollectiveOps(TFE_Context* ctx,
                                                     const void* proto,
                                                     size_t proto_len,
                                                     TF_Status* status);
 
=head2 TFE_AbortCollectiveOps
 
=over 2
 
  Aborts all ongoing collectives with the specified status. After abortion,
  subsequent collectives will error with this status immediately. To reset the
  collectives, create a new EagerContext.
   
  This is intended to be used when a peer failure is detected.
 
=back
 
  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_AbortCollectiveOps(TFE_Context* ctx,
                                                    TF_Status* status);
 
=head2 TFE_CollectiveOpsCheckPeerHealth
 
=over 2
 
  Checks the health of collective ops peers. An explicit health check is needed
  in multi-worker collective ops to detect failures in the cluster.  If a peer is
  down, collective ops may hang.
 
=back
 
  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_CollectiveOpsCheckPeerHealth(
      TFE_Context* ctx, const char* task, int64_t timeout_in_ms,
      TF_Status* status);
 
=head2 TF_NewShapeAndTypeList

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod  view on Meta::CPAN

  Infer shapes for the given `op`. The arguments mimic the arguments of the
  `shape_inference::InferenceContext` constructor. Note the following:
    - The inputs of the `op` are not used for shape inference, so it is
      OK to not have the inputs properly set in `op`. See `input_tensors`
      if you want shape inference to consider the input tensors of the op.
    - The types need not be set in `input_shapes` as they are not used.
    - The number of `input_tensors` should be the same as the number of items
      in `input_shapes`.
   
  The results are returned in `output_shapes` and
  `output_resource_shapes_and_types`. The caller is responsible for freeing the
  memory in these buffers by calling `TF_DeleteShapeAndTypeList`.
 
=back
 
  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_InferShapes(
      TFE_Op* op, TF_ShapeAndTypeList* input_shapes, TF_Tensor** input_tensors,
      TF_ShapeAndTypeList* input_tensor_as_shapes,
      TF_ShapeAndTypeList** input_resource_shapes_and_types,
      TF_ShapeAndTypeList** output_shapes,
      TF_ShapeAndTypeList*** output_resource_shapes_and_types, TF_Status* status);
 
=head2 TF_ImportGraphDefOptionsSetValidateColocationConstraints
 
=over 2
 
=back
 
  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void
  TF_ImportGraphDefOptionsSetValidateColocationConstraints(
      TF_ImportGraphDefOptions* opts, unsigned char enable);
 
=head2 TF_LoadPluggableDeviceLibrary
 
=over 2
 
  Load the library specified by library_filename and register the pluggable
  device and related kernels present in that library. This function is not
  supported on mobile and embedded platforms and will fail if
  called.
   
  Pass "library_filename" to a platform-specific mechanism for dynamically
  loading a library. The rules for determining the exact location of the
  library are platform-specific and are not documented here.
   
  On success, returns the newly created library handle and places OK in status.
  The caller owns the library handle.
  

lib/AI/TensorFlow/Libtensorflow/Manual/GPU.pod  view on Meta::CPAN

An alternative to installing all the software listed on the "bare metal" host
machine is to use C<libtensorflow> via a Docker container and the
NVIDIA Container Toolkit. See L<AI::TensorFlow::Libtensorflow::Manual::Quickstart/DOCKER IMAGES>
for more information.
 
=head1 RUNTIME
 
When running C<libtensorflow>, your program will attempt to acquire quite a bit
of GPU VRAM. You can check if you have enough free VRAM by using the
C<nvidia-smi> command, which displays resource information as well as which
processes are currently using the GPU.  If C<libtensorflow> is not able to
allocate enough memory, it will crash with an out-of-memory (OOM) error. This
is typical when running multiple programs that all use the GPU.
 
If you have multiple GPUs, you can control which GPUs your program can access
by using the C<CUDA_VISIBLE_DEVICES> environment variable provided by the
underlying CUDA library. This is typically done by setting the variable in a
C<BEGIN> block before loading L<AI::TensorFlow::Libtensorflow>, as in this
minimal sketch (the device index C<0> is illustrative; an empty string hides
all GPUs):
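
  use strict;
  use warnings;

  BEGIN {
      # Expose only the first GPU to libtensorflow; set to '' to force
      # CPU-only execution.
      $ENV{CUDA_VISIBLE_DEVICES} = '0';
  }

  use AI::TensorFlow::Libtensorflow;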

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod  view on Meta::CPAN

        image_size => [ 512, 512 ],
    },
);
 
my $model_name = 'centernet_hourglass_512x512';
 
say "Selected model: $model_name : $model_name_to_params{$model_name}{handle}";
 
my $model_uri = URI->new( $model_name_to_params{$model_name}{handle} );
$model_uri->query_form( 'tf-hub-format' => 'compressed' );
my $model_base = substr( $model_uri->path, 1 ) =~ s,/,_,gr;
my $model_archive_path = "${model_base}.tar.gz";
 
my $http = HTTP::Tiny->new;
 
for my $download ( [ $model_uri  => $model_archive_path ],) {
    my ($uri, $path) = @$download;
    say "Downloading $uri to $path";
    next if -e $path;
    $http->mirror( $uri, $path );
}
 
my $ae = Archive::Extract->new( archive => $model_archive_path );
die "Could not extract archive" unless $ae->extract( to => $model_base );
 
my $saved_model = path($model_base)->child('saved_model.pb');
say "Saved model is in $saved_model" if -f $saved_model;
 
# Get the labels
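# Parse each item's numeric id and display_name from the label map's
# protobuf text format.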
 
my %labels_map = $response->{content} =~ m<
(?:item \s+ \{  \s+
  \Qname:\E \s+ "[^"]+" \s+
  \Qid:\E   \s+ (\d+) \s+
  \Qdisplay_name:\E \s+ "([^"]+)" \s+
})+
>sgx;
 
my $label_count = List::Util::max keys %labels_map;
 
say "We have a label count of $label_count. These labels include: ",

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod  view on Meta::CPAN

        op   =>  $graph->OperationByName('serving_default_input_tensor'),
        dict => {
            input_tensor => 0,
        }
    },
    out => {
        op => $graph->OperationByName('StatefulPartitionedCall'),
        dict => {
            detection_boxes   => 0,
            detection_classes => 1,
            detection_scores  => 2,
            num_detections    => 3,
        }
    },
);
 
my %outputs;
 
%outputs = map {
    my $put_type = $_;
    my $op = $ops{$put_type}{op};

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod  view on Meta::CPAN

            });
        } keys %$port_dict
     }
} keys %ops;
 
p %outputs;
 
 
my %images_for_test_to_uri = (
);
 
my @image_names = sort keys %images_for_test_to_uri;
my $h = HTML::Tiny->new;
 
my $image_name = 'beach_scene';
if( IN_IPERL ) {
    IPerl->html(
        $h->a( { href => $images_for_test_to_uri{$image_name} },
            $h->img({

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod  view on Meta::CPAN

                width => '100%',
            })
        ),
    );
}
 
sub load_image_to_pdl {
    my ($uri, $image_size) = @_;
 
    my $http = HTTP::Tiny->new;
    my $response = $http->get( $uri );
    die "Could not fetch image from $uri" unless $response->{success};
    say "Downloaded $uri";
 
    my $img = Imager->new;
    $img->read( data => $response->{content} );
 
    # Create PDL ndarray from Imager data in-memory.
    my $data;
    $img->write( data => \$data, type => 'raw' )
        or die "could not write ". $img->errstr;
 
    die "Image does not have 3 channels, it has @{[ $img->getchannels ]} channels"
        if $img->getchannels != 3;
 
    # $data is packed as PDL->dims == [w,h] with RGB pixels

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod  view on Meta::CPAN

undef;
 
my $tftensor_output_by_name = $RunSession->($session, $t);
 
my %pdl_output_by_name = map {
    $_ => FloatTFTensorToPDL( $tftensor_output_by_name->{$_} )
} keys $tftensor_output_by_name->%*;
 
undef;
 
my $min_score_thresh = 0.30;
 
my $which_detect = which( $pdl_output_by_name{detection_scores} > $min_score_thresh );
 
my %subset;
 
$subset{detection_boxes}   = $pdl_output_by_name{detection_boxes}->dice('X', $which_detect);
$subset{detection_classes} = $pdl_output_by_name{detection_classes}->dice($which_detect);
$subset{detection_scores}  = $pdl_output_by_name{detection_scores}->dice($which_detect);
 
$subset{detection_class_labels}->@* = map { $labels_map{$_} } $subset{detection_classes}->list;
 
p %subset;
 
 
my $plot_output_path = 'objects-detected.png';
my $gp = gpwin('pngcairo', font => ",12", output => $plot_output_path, aa => 2, size => [10] );
 
my @qual_cmap = ('#a6cee3','#1f78b4','#b2df8a','#33a02c','#fb9a99','#e31a1c','#fdbf6f','#ff7f00','#cab2d6');
 
$gp->options(
    map {
        my $idx = $_;
        my $lc_rgb = $qual_cmap[ $subset{detection_classes}->slice("($idx)")->squeeze % @qual_cmap ];
 
        my $box_corners_yx_norm = $subset{detection_boxes}->slice([],$idx,[0,0,0]);
        $box_corners_yx_norm->reshape(2,2);
 
        my $box_corners_yx_img = $box_corners_yx_norm * $pdl_images[0]->shape->slice('-1:-2');
 
        my $from_xy = join ",", $box_corners_yx_img->slice('-1:0,(0)')->list;
        my $to_xy   = join ",", $box_corners_yx_img->slice('-1:0,(1)')->list;
        my $label_xy = join ",", $box_corners_yx_img->at(1,1), $box_corners_yx_img->at(0,1);
 
        (
            [ object => [ "rect" =>
                from => $from_xy, to => $to_xy,
                qq{front fs empty border lc rgb "$lc_rgb" lw 5} ], ],
            [ label => [
                sprintf("%s: %.1f",
                    $subset{detection_class_labels}[$idx],
                    100*$subset{detection_scores}->at($idx,0) ) =>
                at => $label_xy, 'left',
                offset => 'character 0,-0.25',
                qq{font ",12" boxed front tc rgb "#ffffff"} ], ],
        )
    } 0..$subset{detection_boxes}->dim(1)-1
);
 
$gp->plot(
    topcmds => q{set style textbox opaque fc "#505050f0" noborder},
    square => 1,

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod  view on Meta::CPAN

=pod
 
=encoding UTF-8
 
=head1 NAME
 
AI::TensorFlow::Libtensorflow::Manual::Notebook::InferenceUsingTFHubCenterNetObjDetect - Using TensorFlow to do object detection using a pre-trained model
 
=head1 SYNOPSIS
 
The following tutorial is based on the L<TensorFlow Hub Object Detection Colab notebook|https://www.tensorflow.org/hub/tutorials/tf2_object_detection>. It uses a pre-trained model based on the I<CenterNet> architecture trained on the I<COCO 2017> dat...
 
Some of this code is identical to that of the C<InferenceUsingTFHubMobileNetV2Model> notebook. Please look there for an explanation of that code. As stated there, this will later be wrapped up into a high-level library to hide the details behind an API.
 
=head1 COLOPHON
 
The following document is either a POD file which can additionally be run as a Perl script or a Jupyter Notebook which can be run in L<IPerl|https://p3rl.org/Devel::IPerl> (viewable online at L<nbviewer|https://nbviewer.org/github/EntropyOrg/perl-AI-...
 
=over
 
=item *
 
C<PDL::Graphics::Gnuplot> requires C<gnuplot>.
 
=back
 
If you are running the code, you may optionally install the L<C<tensorflow> Python package|https://www.tensorflow.org/install/pip> in order to access the C<saved_model_cli> command, but this is only used for informational purposes.
 
=head1 TUTORIAL
 
=head2 Load the library
 
First, we need to load the C<AI::TensorFlow::Libtensorflow> library and more helpers. We then create an C<AI::TensorFlow::Libtensorflow::Status> object and helper function to make sure that the calls to the C<libtensorflow> C library are working prop...

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod  view on Meta::CPAN

      },
  );
   
  my $model_name = 'centernet_hourglass_512x512';
   
  say "Selected model: $model_name : $model_name_to_params{$model_name}{handle}";
 
We download the model to the current directory and then extract the model to a folder with the name given in C<$model_base>.
 
  my $model_uri = URI->new( $model_name_to_params{$model_name}{handle} );
  $model_uri->query_form( 'tf-hub-format' => 'compressed' );
  my $model_base = substr( $model_uri->path, 1 ) =~ s,/,_,gr;
  my $model_archive_path = "${model_base}.tar.gz";
   
  my $http = HTTP::Tiny->new;
   
  for my $download ( [ $model_uri  => $model_archive_path ],) {
      my ($uri, $path) = @$download;
      say "Downloading $uri to $path";
      next if -e $path;
      $http->mirror( $uri, $path );

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod  view on Meta::CPAN

  my $ae = Archive::Extract->new( archive => $model_archive_path );
  die "Could not extract archive" unless $ae->extract( to => $model_base );
   
  my $saved_model = path($model_base)->child('saved_model.pb');
  say "Saved model is in $saved_model" if -f $saved_model;
 
We need to download the COCO 2017 classification labels and parse out the mapping from the numeric index to the textual descriptions.
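
A minimal sketch of fetching the label map to populate the C<$response> used
below (C<$labels_uri> is a hypothetical stand-in for the label-map URL):

  my $labels_uri = '...';  # hypothetical label-map URL, elided here
  my $response = HTTP::Tiny->new->get( $labels_uri );
  die "Could not fetch labels from $labels_uri" unless $response->{success};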
 
  # Get the labels
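  # Each captured (id, display_name) pair maps a numeric class index to a
  # human-readable label.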
   
  my %labels_map = $response->{content} =~ m<
  (?:item \s+ \{  \s+
    \Qname:\E \s+ "[^"]+" \s+
    \Qid:\E   \s+ (\d+) \s+
    \Qdisplay_name:\E \s+ "([^"]+)" \s+
  })+
  >sgx;
   
  my $label_count = List::Util::max keys %labels_map;
   
  say "We have a label count of $label_count. These labels include: ",

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod  view on Meta::CPAN

=item -
 
C<detection_boxes>: a C<tf.float32> tensor of shape [N, 4] containing bounding box coordinates in the following order: [ymin, xmin, ymax, xmax].
 
=item -
 
C<detection_classes>: a C<tf.int> tensor of shape [N] containing detection class index from the label file.
 
=item -
 
C<detection_scores>: a C<tf.float32> tensor of shape [N] containing detection scores.
 
=back
 
=back
 
Note that the above documentation has two errors: both C<num_detections> and C<detection_classes> are not of type C<tf.int>, but are actually C<tf.float32>.
 
Now we can load the model from that folder with the tag set C<[ 'serve' ]> by using the C<LoadFromSavedModel> constructor to create a C<::Graph> and a C<::Session> for that graph.
 
  my $opt = AI::TensorFlow::Libtensorflow::SessionOptions->New;
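
  # A sketch of the rest of the load, assuming the binding mirrors the C
  # API's TF_LoadSessionFromSavedModel and that $s is the ::Status object
  # created in "Load the library" above.
  my $graph = AI::TensorFlow::Libtensorflow::Graph->New;
  my $session = AI::TensorFlow::Libtensorflow::Session->LoadFromSavedModel(
      $opt, undef, $model_base, [ 'serve' ], $graph, undef, $s
  );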

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod  view on Meta::CPAN

        op   =>  $graph->OperationByName('serving_default_input_tensor'),
        dict => {
            input_tensor => 0,
        }
    },
    out => {
        op => $graph->OperationByName('StatefulPartitionedCall'),
        dict => {
            detection_boxes   => 0,
            detection_classes => 1,
            detection_scores  => 2,
            num_detections    => 3,
        }
    },
);
 
my %outputs;
 
%outputs = map {
    my $put_type = $_;
    my $op = $ops{$put_type}{op};

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod  view on Meta::CPAN

       }
  } keys %ops;
   
  p %outputs;
 
Now we can get the following test image from GitHub.
 
  use HTML::Tiny;
   
  my %images_for_test_to_uri = (
  );
   
  my @image_names = sort keys %images_for_test_to_uri;
  my $h = HTML::Tiny->new;
   
  my $image_name = 'beach_scene';
  if( IN_IPERL ) {
      IPerl->html(
          $h->a( { href => $images_for_test_to_uri{$image_name} },
              $h->img({
                  src => $images_for_test_to_uri{$image_name},
                  alt => $image_name,
                  width => '100%',
              })
          ),
      );
  }
 
=head2 Download the test image and transform it into suitable input data
 
We now fetch the image and prepare it to be in the needed format by using C<Imager>. Note that this model does not need the input image to be of a certain size so no resizing or padding is required.
 
Then we turn the C<Imager> data into a C<PDL> ndarray. Since we just need the 3 channels of the image as they are, they can be stored directly in a C<PDL> ndarray of type C<byte>.
 
The reason why we need to concatenate the C<PDL> ndarrays here despite the model only taking a single image at a time is to get an ndarray with four (4) dimensions with the last C<PDL> dimension of size one (1).
 
  sub load_image_to_pdl {
      my ($uri, $image_size) = @_;
   
      my $http = HTTP::Tiny->new;
      my $response = $http->get( $uri );
      die "Could not fetch image from $uri" unless $response->{success};
      say "Downloaded $uri";
   
      my $img = Imager->new;
      $img->read( data => $response->{content} );
   
      # Create PDL ndarray from Imager data in-memory.
      my $data;
      $img->write( data => \$data, type => 'raw' )
          or die "could not write ". $img->errstr;
   
      die "Image does not have 3 channels, it has @{[ $img->getchannels ]} channels"
          if $img->getchannels != 3;
   
      # $data is packed as PDL->dims == [w,h] with RGB pixels

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod  view on Meta::CPAN

  my $tftensor_output_by_name = $RunSession->($session, $t);
   
  my %pdl_output_by_name = map {
      $_ => FloatTFTensorToPDL( $tftensor_output_by_name->{$_} )
  } keys $tftensor_output_by_name->%*;
   
  undef;
 
=head2 Results summary
 
Then we use a score threshold to select the objects of interest.
 
  my $min_score_thresh = 0.30;
   
  my $which_detect = which( $pdl_output_by_name{detection_scores} > $min_score_thresh );
   
  my %subset;
   
  $subset{detection_boxes}   = $pdl_output_by_name{detection_boxes}->dice('X', $which_detect);
  $subset{detection_classes} = $pdl_output_by_name{detection_classes}->dice($which_detect);
  $subset{detection_scores}  = $pdl_output_by_name{detection_scores}->dice($which_detect);
   
  $subset{detection_class_labels}->@* = map { $labels_map{$_} } $subset{detection_classes}->list;
   
  p %subset;
 
The following uses the bounding boxes and class label information to draw boxes and labels on top of the image using Gnuplot.
 
  use PDL::Graphics::Gnuplot;
   
  my $plot_output_path = 'objects-detected.png';
  my $gp = gpwin('pngcairo', font => ",12", output => $plot_output_path, aa => 2, size => [10] );
   
  my @qual_cmap = ('#a6cee3','#1f78b4','#b2df8a','#33a02c','#fb9a99','#e31a1c','#fdbf6f','#ff7f00','#cab2d6');
   
  $gp->options(
      map {
          my $idx = $_;
          my $lc_rgb = $qual_cmap[ $subset{detection_classes}->slice("($idx)")->squeeze % @qual_cmap ];
   
          my $box_corners_yx_norm = $subset{detection_boxes}->slice([],$idx,[0,0,0]);
          $box_corners_yx_norm->reshape(2,2);
   
          my $box_corners_yx_img = $box_corners_yx_norm * $pdl_images[0]->shape->slice('-1:-2');
   
          my $from_xy = join ",", $box_corners_yx_img->slice('-1:0,(0)')->list;
          my $to_xy   = join ",", $box_corners_yx_img->slice('-1:0,(1)')->list;
          my $label_xy = join ",", $box_corners_yx_img->at(1,1), $box_corners_yx_img->at(0,1);
   
          (
              [ object => [ "rect" =>
                  from => $from_xy, to => $to_xy,
                  qq{front fs empty border lc rgb "$lc_rgb" lw 5} ], ],
              [ label => [
                  sprintf("%s: %.1f",
                      $subset{detection_class_labels}[$idx],
                      100*$subset{detection_scores}->at($idx,0) ) =>
                  at => $label_xy, 'left',
                  offset => 'character 0,-0.25',
                  qq{font ",12" boxed front tc rgb "#ffffff"} ], ],
          )
      } 0..$subset{detection_boxes}->dim(1)-1
  );
   
  $gp->plot(
      topcmds => q{set style textbox opaque fc "#505050f0" noborder},
      square => 1,

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod  view on Meta::CPAN

  use Filesys::DiskUsage qw/du/;
   
  my $total = du( { 'human-readable' => 1, dereference => 1 },
      $model_archive_path, $model_base );
   
  say "Disk space usage: $total"; undef;
 
=head1 CPANFILE
 
  requires 'AI::TensorFlow::Libtensorflow';
  requires 'AI::TensorFlow::Libtensorflow::DataType';
  requires 'Archive::Extract';
  requires 'Data::Printer';
  requires 'Data::Printer::Filter::PDL';
  requires 'FFI::Platypus::Buffer';
  requires 'FFI::Platypus::Memory';
  requires 'File::Which';
  requires 'Filesys::DiskUsage';
  requires 'HTML::Tiny';
  requires 'HTTP::Tiny';
  requires 'Imager';
  requires 'List::Util', '1.56';
  requires 'PDL';
  requires 'PDL::Graphics::Gnuplot';
  requires 'Path::Tiny';
  requires 'Syntax::Construct';
  requires 'Text::Table::Tiny';
  requires 'URI';
  requires 'constant';
  requires 'feature';
  requires 'lib::projectroot';
  requires 'strict';
  requires 'utf8';
  requires 'warnings';
 
=head1 AUTHOR
 
Zakariyya Mughal <zmughal@cpan.org>
 
=head1 COPYRIGHT AND LICENSE
 
This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.
 
This is free software, licensed under:

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

    memcpy scalar_to_pointer( ${$pdl->get_dataref} ),
        scalar_to_pointer( ${$t->Data} ),
        $t->ByteSize;
    $pdl->upd_data;
 
    $pdl;
}
 
# Model handle
my $model_uri = URI->new( 'https://tfhub.dev/deepmind/enformer/1' );
$model_uri->query_form( 'tf-hub-format' => 'compressed' );
my $model_base = substr( $model_uri->path, 1 ) =~ s,/,_,gr;
my $model_archive_path = "${model_base}.tar.gz";
my $model_sequence_length = 393_216; # bp
 
# Human targets from Basenji2 dataset
my $targets_path = 'targets_human.txt';
 
# Human reference genome

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

    die "Could not extract archive" unless $ae->extract( to => $model_base );
}
 
use Digest::file qw(digest_file_hex);
if( digest_file_hex( $hg_gz_path, "MD5" ) eq $hg_md5_digest ) {
    say "MD5 sum for $hg_gz_path OK";
} else {
    die "Digest for $hg_gz_path failed";
}
 
(my $hg_uncompressed_path = $hg_gz_path) =~ s/\.gz$//;
my $hg_bgz_path = "${hg_uncompressed_path}.bgz";
 
 
if( ! -e $hg_bgz_path ) {
    IPC::Run::run(
        [ qw(gunzip -c) ], '<', $hg_gz_path,
        '|',
        [ qw(bgzip -c)  ], '>', $hg_bgz_path
    );
}

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

 
my $hg_bgz_fai_path = "${hg_bgz_path}.fai";
if( ! -e $hg_bgz_fai_path ) {
    my $faidx_tool = Bio::Tools::Run::Samtools->new( -command => 'faidx' );
    $faidx_tool->run( -fas => $hg_bgz_path )
        or die "Could not index FASTA file $hg_bgz_path: " . $faidx_tool->error_string;
}
 
sub saved_model_cli {
    my (@rest) = @_;
    if( File::Which::which('saved_model_cli')) {
        system(qw(saved_model_cli), @rest ) == 0
            or die "Could not run saved_model_cli";
    } else {
        warn "saved_model_cli(): Install the tensorflow Python package to get the `saved_model_cli` command.\n";
        return -1;
    }
}
 
say "Checking with saved_model_cli scan:";
saved_model_cli( qw(scan),
    qw(--dir) => $model_base,

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

 
sub center {
    my $self = shift;
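    # Ceiling of the interval midpoint: int() truncates, so add back the
    # remainder when (start + end) is odd.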
    my $center = int( ($self->start + $self->end ) / 2 );
    my $delta = ($self->start + $self->end ) % 2;
    return $center + $delta;
}
 
sub resize {
    my ($self, $width) = @_;
    my $new_interval = $self->clone;
 
    my $center = $self->center;
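    # Distribute (width - 1) around the center; for even widths the extra
    # base pair goes on the start side.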
    my $half   = int( ($width-1) / 2 );
    my $offset = ($width-1) % 2;
 
    $new_interval->start( $center - $half - $offset );
    $new_interval->end(   $center + $half  );

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

    use overload '""' => \&_op_stringify;
 
    sub _op_stringify { sprintf "%s:%s", $_[0]->seq_id // "(no sequence)", $_[0]->to_FTstring }
}
 
#####
 
{
 
say "Testing interval resizing:\n";
sub _debug_resize {
    my ($interval, $to, $msg) = @_;
 
    my $resized_interval = $interval->resize($to);
 
    die "Wrong interval size for $interval --($to)--> $resized_interval"
        unless $resized_interval->length == $to;
 
    say sprintf "Interval: %s -> %s, length %2d : %s",
        $interval,
        $resized_interval, $resized_interval->length,
        $msg;
}
 
for my $interval_spec ( [4, 8], [5, 8], [5, 9], [6, 9]) {
    my ($start, $end) = @$interval_spec;
    my $test_interval = Interval->new( -seq_id => 'chr11', -start => $start, -end => $end );
    say sprintf "Testing interval %s with length %d", $test_interval, $test_interval->length;
    say "-----";
    for(0..5) {
        my $base = $test_interval->length;
        my $to = $base + $_;
        _debug_resize $test_interval, $to, "$base -> $to (+ $_)";
    }
    say "";
}
 
}
 
undef;
 

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

say "1 base: ",   seq_info
    extract_sequence( $hg_db,
        Interval->new( -seq_id => 'chr11',
            -start => 35_082_742 + 1,
            -end   => 35_082_742 + 1 ) );
 
say "3 bases: ",  seq_info
    extract_sequence( $hg_db,
        Interval->new( -seq_id => 'chr11',
            -start => 1,
            -end   => 1 )->resize(3) );
 
say "5 bases: ", seq_info
    extract_sequence( $hg_db,
        Interval->new( -seq_id => 'chr11',
            -start => $hg_db->length('chr11'),
            -end   => $hg_db->length('chr11') )->resize(5) );
 
say "chr11 is of length ", $hg_db->length('chr11');
say "chr11 bases: ", seq_info
    extract_sequence( $hg_db,
        Interval->new( -seq_id => 'chr11',
            -start => 1,
            -end   => $hg_db->length('chr11') )->resize( $hg_db->length('chr11') ) );
}
 
my $target_interval = Interval->new( -seq_id => 'chr11',
    -start => 35_082_742 +  1, # BioPerl is 1-based
    -end   => 35_197_430 );
 
say "Target interval: $target_interval with length @{[ $target_interval->length ]}";
 
die "Target interval is not $model_central_base_pairs_length bp long"
    unless $target_interval->length == $model_central_base_pairs_length;
 
say "Target sequence is ", seq_info extract_sequence( $hg_db, $target_interval );
 
 
say "";
 
 
my $resized_interval = $target_interval->resize( $model_sequence_length );
say "Resized interval: $resized_interval with length @{[ $resized_interval->length ]}";
 
die "resize() is not working properly!" unless $resized_interval->length == $model_sequence_length;
 
my $seq = extract_sequence( $hg_db, $resized_interval );
 
say "Resized sequence is ", seq_info($seq);
 
my $sequence_one_hot = one_hot_dna( $seq )->dummy(-1);
 
say $sequence_one_hot->info; undef;
 
my $t = Devel::Timer->new;

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

say "Disk space usage: $total"; undef;
 
__END__
 
=pod
 
=encoding UTF-8
 
=head1 NAME
 
AI::TensorFlow::Libtensorflow::Manual::Notebook::InferenceUsingTFHubEnformerGeneExprPredModel - Using TensorFlow to do gene expression prediction using a pre-trained model
 
=head1 SYNOPSIS
 
The following tutorial is based on the L<Enformer usage notebook|https://github.com/deepmind/deepmind-research/blob/master/enformer/enformer-usage.ipynb>. It uses a pre-trained model based on a transformer architecture trained as described in Avsec e...
 
Running the code requires an Internet connection to download the model (from Google servers) and datasets (from GitHub, UCSC, and NIH).
 
Some of this code is identical to that of the C<InferenceUsingTFHubMobileNetV2Model> notebook. Please look there for an explanation of that code. As stated there, this will later be wrapped up into a high-level library to hide the details behind an API.
 
B<NOTE>: If running this model, please be aware that
 
=over
 
=item *
 
the Docker image takes 3 GB or more of disk space;

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

=head1 COLOPHON
 
The following document is either a POD file which can additionally be run as a Perl script or a Jupyter Notebook which can be run in L<IPerl|https://p3rl.org/Devel::IPerl> (viewable online at L<nbviewer|https://nbviewer.org/github/EntropyOrg/perl-AI-...
 
You will also need the executables C<gunzip>, C<bgzip>, and C<samtools>. Furthermore,
 
=over
 
=item *
 
C<Bio::DB::HTS> requires C<libhts> and
 
=item *
 
C<PDL::Graphics::Gnuplot> requires C<gnuplot>.
 
=back
 
If you are running the code, you may optionally install the L<C<tensorflow> Python package|https://www.tensorflow.org/install/pip> in order to access the C<saved_model_cli> command, but this is only used for informational purposes.
 
=head1 TUTORIAL
 
=head2 Load the library
 
First, we need to load the C<AI::TensorFlow::Libtensorflow> library and more helpers. We then create an C<AI::TensorFlow::Libtensorflow::Status> object and helper function to make sure that the calls to the C<libtensorflow> C library are working prop...

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

  }
 
=head2 Download model and data
 
=over
 
=item *
 
 
  > Avsec Ž, Agarwal V, Visentin D, Ledsam JR, Grabska-Barwinska A, Taylor KR, Assael Y, Jumper J, Kohli P, Kelley DR. Effective gene expression prediction from sequence by integrating long-range interactions. I<Nat Methods>. 2021 Oct;B<18(10)>:1196...
 
=item *
 
 
  > Kelley DR. Cross-species regulatory sequence activity prediction. I<PLoS Comput Biol>. 2020 Jul 20;B<16(7)>:e1008050. doi: L<10.1371/journal.pcbi.1008050|https://doi.org/10.1371/journal.pcbi.1008050>. PMID: L<32687525|https://pubmed.ncbi.nlm.nih....
 
=item *
 

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

=item *
 
 
  > Landrum MJ, Lee JM, Benson M, Brown GR, Chao C, Chitipiralla S, Gu B, Hart J, Hoffman D, Jang W, Karapetyan K, Katz K, Liu C, Maddipatla Z, Malheiro A, McDaniel K, Ovetsky M, Riley G, Zhou G, Holmes JB, Kattman BL, Maglott DR. ClinVar: improving ...
 
=back
 
  # Model handle
  my $model_uri = URI->new( 'https://tfhub.dev/deepmind/enformer/1' );
  $model_uri->query_form( 'tf-hub-format' => 'compressed' );
  my $model_base = substr( $model_uri->path, 1 ) =~ s,/,_,gr;
  my $model_archive_path = "${model_base}.tar.gz";
  my $model_sequence_length = 393_216; # bp
   
  # Human targets from Basenji2 dataset
  my $targets_path = 'targets_human.txt';
   
  # Human reference genome

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

                     [ $hg_uri      => $hg_gz_path            ],
                     [ $clinvar_uri => $clinvar_path       ],) {
      my ($uri, $path) = @$download;
      say "Downloading $uri to $path";
      next if -e $path;
      $http->mirror( $uri, $path );
  }
 
B<STREAM (STDOUT)>:
 
  Downloading https://tfhub.dev/deepmind/enformer/1?tf-hub-format=compressed to deepmind_enformer_1.tar.gz
 
Now we
 
=over
 
=item 1.

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

=item 1.
 
convert the gzip'd file into a block gzip'd file and
 
=item 2.
 
index that C<.bgz> file using C<faidx> from C<samtools>.
 
=back
 
  (my $hg_uncompressed_path = $hg_gz_path) =~ s/\.gz$//;
  my $hg_bgz_path = "${hg_uncompressed_path}.bgz";
   
  use IPC::Run;
   
  if( ! -e $hg_bgz_path ) {
      IPC::Run::run(
          [ qw(gunzip -c) ], '<', $hg_gz_path,
          '|',
          [ qw(bgzip -c)  ], '>', $hg_bgz_path
      );
  }

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

      my $faidx_tool = Bio::Tools::Run::Samtools->new( -command => 'faidx' );
      $faidx_tool->run( -fas => $hg_bgz_path )
          or die "Could not index FASTA file $hg_bgz_path: " . $faidx_tool->error_string;
  }
 
=head2 Model input and output specification
 
Now we create a helper to call C<saved_model_cli> and run C<saved_model_cli scan> to ensure that the model is I/O-free for security reasons.
 
  sub saved_model_cli {
      my (@rest) = @_;
      if( File::Which::which('saved_model_cli')) {
          system(qw(saved_model_cli), @rest ) == 0
              or die "Could not run saved_model_cli";
      } else {
          warn "saved_model_cli(): Install the tensorflow Python package to get the `saved_model_cli` command.\n";
          return -1;
      }
  }
   
  say "Checking with saved_model_cli scan:";
  saved_model_cli( qw(scan),
      qw(--dir) => $model_base,

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

the output C<human> which has the name C<StatefulPartitionedCall:0>.
 
=back
 
all of which are C<DT_FLOAT>.
 
Make note of the shapes that those take. Per the L<model description|https://tfhub.dev/deepmind/enformer/1> at TensorFlow Hub:
 
=over 2
 
The input sequence length is 393,216 with the prediction corresponding to 128 base pair windows for the center 114,688 base pairs. The input sequence is one hot encoded using the order of indices corresponding to 'ACGT' with N values being all zeros.
 
=back
 
The input shape C<(-1, 393216, 4)> thus represents dimensions C<[batch size] x [sequence length] x [one-hot encoding of ACGT]>.
 
The output shape C<(-1, 896, 5313)> represents dimensions C<[batch size] x [ predictions along 114,688 base pairs / 128 base pair windows ] x [ human target by index ]>. We can confirm this by doing some calculations:
 
  my $model_central_base_pairs_length     = 114_688; # bp
  my $model_central_base_pair_window_size = 128;     # bp / prediction
   
  say "Number of predictions: ", $model_central_base_pairs_length / $model_central_base_pair_window_size;
 
B<STREAM (STDOUT)>:
 
  Number of predictions: 896

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

    
      return $outputs_t[0];
  };
   
  undef;
 
=head2 Encoding the data
 
The model specifies that the way to get a sequence of DNA bases into a C<TFTensor> is to use L<one-hot encoding|https://en.wikipedia.org/wiki/One-hot#Machine_learning_and_statistics> in the order C<ACGT>.
 
This means that the bases are represented as vectors of length 4:
 
| base | vector encoding |
|------|-----------------|
| A    | C<[1 0 0 0]>     |
| C    | C<[0 1 0 0]>     |
| G    | C<[0 0 1 0]>     |
| T    | C<[0 0 0 1]>     |
| N    | C<[0 0 0 0]>     |
 
We can achieve this encoding by creating a lookup table with a PDL ndarray. This could be done by creating a byte PDL ndarray of dimensions C<[ 256 4 ]> to directly look up the numeric value of characters 0-255, but here we'll go with a smaller C...
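
For clarity, here is a minimal, unoptimized sketch of the same encoding (the
hypothetical C<one_hot_dna_naive> below is illustrative only; the document's
C<one_hot_dna> uses the lookup-table approach described above):

  use PDL;

  sub one_hot_dna_naive {
      my ($seq) = @_;
      my %base_to_index = ( A => 0, C => 1, G => 2, T => 3 );
      # Dims [ 4, sequence length ]: one float vector per base.
      my $one_hot = zeroes( float, 4, length $seq );
      my @bases = split //, uc $seq;
      for my $i ( 0 .. $#bases ) {
          my $idx = $base_to_index{ $bases[$i] };
          # 'N' (or any unknown base) is left as all zeros.
          $one_hot->set( $idx, $i, 1 ) if defined $idx;
      }
      return $one_hot;
  }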

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

 
sub center {
    my $self = shift;
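    # Ceiling of the interval midpoint: int() truncates, so add back the
    # remainder when (start + end) is odd.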
    my $center = int( ($self->start + $self->end ) / 2 );
    my $delta = ($self->start + $self->end ) % 2;
    return $center + $delta;
}
 
sub resize {
    my ($self, $width) = @_;
    my $new_interval = $self->clone;
 
    my $center = $self->center;
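    # Distribute (width - 1) around the center; for even widths the extra
    # base pair goes on the start side.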
    my $half   = int( ($width-1) / 2 );
    my $offset = ($width-1) % 2;
 
    $new_interval->start( $center - $half - $offset );
    $new_interval->end(   $center + $half  );

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

    
      use overload '""' => \&_op_stringify;
   
      sub _op_stringify { sprintf "%s:%s", $_[0]->seq_id // "(no sequence)", $_[0]->to_FTstring }
  }
   
  #####
   
  {
   
  say "Testing interval resizing:\n";
  sub _debug_resize {
      my ($interval, $to, $msg) = @_;
   
      my $resized_interval = $interval->resize($to);
   
      die "Wrong interval size for $interval --($to)--> $resized_interval"
          unless $resized_interval->length == $to;
   
      say sprintf "Interval: %s -> %s, length %2d : %s",
          $interval,
          $resized_interval, $resized_interval->length,
          $msg;
  }
   
  for my $interval_spec ( [4, 8], [5, 8], [5, 9], [6, 9]) {
      my ($start, $end) = @$interval_spec;
      my $test_interval = Interval->new( -seq_id => 'chr11', -start => $start, -end => $end );
      say sprintf "Testing interval %s with length %d", $test_interval, $test_interval->length;
      say "-----";
      for(0..5) {
          my $base = $test_interval->length;
          my $to = $base + $_;
          _debug_resize $test_interval, $to, "$base -> $to (+ $_)";
      }
      say "";
  }
   
  }
   
  undef;
 
B<STREAM (STDOUT)>:
 
  Testing interval resizing:
   
  Testing interval chr11:4..8 with length 5
  -----
  Interval: chr11:4..8 -> chr11:4..8, length  5 : 5 -> 5 (+ 0)
  Interval: chr11:4..8 -> chr11:3..8, length  6 : 5 -> 6 (+ 1)
  Interval: chr11:4..8 -> chr11:3..9, length  7 : 5 -> 7 (+ 2)
  Interval: chr11:4..8 -> chr11:2..9, length  8 : 5 -> 8 (+ 3)
  Interval: chr11:4..8 -> chr11:2..10, length  9 : 5 -> 9 (+ 4)
  Interval: chr11:4..8 -> chr11:1..10, length 10 : 5 -> 10 (+ 5)
  

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

  say "1 base: ",   seq_info
      extract_sequence( $hg_db,
          Interval->new( -seq_id => 'chr11',
              -start => 35_082_742 + 1,
              -end   => 35_082_742 + 1 ) );
   
  say "3 bases: ",  seq_info
      extract_sequence( $hg_db,
          Interval->new( -seq_id => 'chr11',
              -start => 1,
              -end   => 1 )->resize(3) );
   
  say "5 bases: ", seq_info
      extract_sequence( $hg_db,
          Interval->new( -seq_id => 'chr11',
              -start => $hg_db->length('chr11'),
              -end   => $hg_db->length('chr11') )->resize(5) );
   
  say "chr11 is of length ", $hg_db->length('chr11');
  say "chr11 bases: ", seq_info
      extract_sequence( $hg_db,
          Interval->new( -seq_id => 'chr11',
              -start => 1,
              -end   => $hg_db->length('chr11') )->resize( $hg_db->length('chr11') ) );
  }
 
B<STREAM (STDOUT)>:
 
  Testing sequence extraction:
  1 base: G (length 1)
  3 bases: NNN (length 3)
  5 bases: NNNNN (length 5)
  chr11 is of length 135086622
  chr11 bases: NNNNNNNNNN...NNNNNNNNNN (length 135086622)
 
B<RESULT>:
 
  1
 
Now we can use the same target interval that is used in the example notebook which recreates part of L<figure 1|https://www.nature.com/articles/s41592-021-01252-x/figures/1> from the Enformer paper.
 
  my $target_interval = Interval->new( -seq_id => 'chr11',
      -start => 35_082_742 +  1, # BioPerl is 1-based
      -end   => 35_197_430 );
   
  say "Target interval: $target_interval with length @{[ $target_interval->length ]}";
   
  die "Target interval is not $model_central_base_pairs_length bp long"
      unless $target_interval->length == $model_central_base_pairs_length;
   
  say "Target sequence is ", seq_info extract_sequence( $hg_db, $target_interval );
   
   
  say "";
   
   
  my $resized_interval = $target_interval->resize( $model_sequence_length );
  say "Resized interval: $resized_interval with length @{[ $resized_interval->length ]}";
   
  die "resize() is not working properly!" unless $resized_interval->length == $model_sequence_length;
   
  my $seq = extract_sequence( $hg_db, $resized_interval );
   
  say "Resized sequence is ", seq_info($seq);
 
B<STREAM (STDOUT)>:
 
  Target interval: chr11:35082743..35197430 with length 114688
  Target sequence is GGTGGCAGCC...ATCTCCTTTT (length 114688)
   
  Resized interval: chr11:34943479..35336694 with length 393216
  Resized sequence is ACTAGTTCTA...GGCCCAAATC (length 393216)
 
B<RESULT>:
 
  1
 
To prepare the input we have to one-hot encode this resized sequence and give it a dummy dimension at the end to indicate that it is a batch with a single sequence. Then we can turn the PDL ndarray into a C<TFTensor> and pass it to our prediction ...
 
  my $sequence_one_hot = one_hot_dna( $seq )->dummy(-1);
   
  say $sequence_one_hot->info; undef;
 
B<STREAM (STDOUT)>:
 
  PDL: Float D [4,393216,1]

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

  $gp->end_multi;
   
  $gp->close;
   
  if( IN_IPERL ) {
      IPerl->png( bytestream => path($plot_output_path)->slurp_raw );
  }
 
B<DISPLAY>:
 
=for html <span style="display:inline-block;margin-left:1em;"><p><img                                            src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAA+gAAAMgCAIAAAA/et9qAAAgAElEQVR4nOzdd2AUVeIH8Ddb0jshBAIEpSo1GjoIpyAgCOqd3uGdoGBBUQQFRUVBRbkTf9gOBQucqFiwUhSSgJQYCCSBkJBAet1k...
 
=head2 Parts of the original notebook that fall outside the scope
 
In the original notebook, there are several more steps that have not been ported here:
 
=over
 
=item 1.
 
"Compute contribution scores":
 
This task requires implementing C<@tf.function> to compile gradients.
 
=item 2.
 
"Predict the effect of a genetic variant" and "Score multiple variants":
 
The first task is possible, but the second task requires loading a pre-processing pipeline for scikit-learn and unfortunately this pipeline is stored as a pickle file that is valid for an older version of scikit-learn (version 0.23.2) and as such its...
 
=back
 
  # Some code that could be used for working with variants.
  1 if <<'COMMENT';
   
  use Bio::DB::HTS::VCF;
   
  my $clinvar_tbi_path = "${clinvar_path}.tbi";
  unless( -f $clinvar_tbi_path ) {

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod  view on Meta::CPAN

  );
   
  say "Disk space usage: $total"; undef;
 
B<STREAM (STDOUT)>:
 
  Disk space usage: 4.66G
 
=head1 CPANFILE
 
  requires 'AI::TensorFlow::Libtensorflow';
  requires 'AI::TensorFlow::Libtensorflow::DataType';
  requires 'Archive::Extract';
  requires 'Bio::DB::HTS::Faidx';
  requires 'Bio::Location::Simple';
  requires 'Bio::Tools::Run::Samtools';
  requires 'Data::Frame';
  requires 'Data::Printer';
  requires 'Data::Printer::Filter::PDL';
  requires 'Devel::Timer';
  requires 'Digest::file';
  requires 'FFI::Platypus::Buffer';
  requires 'FFI::Platypus::Memory';
  requires 'File::Which';
  requires 'Filesys::DiskUsage';
  requires 'HTTP::Tiny';
  requires 'IPC::Run';
  requires 'List::Util';
  requires 'PDL';
  requires 'PDL::Graphics::Gnuplot';
  requires 'Path::Tiny';
  requires 'Syntax::Construct';
  requires 'Text::Table::Tiny';
  requires 'URI';
  requires 'constant';
  requires 'feature';
  requires 'lib::projectroot';
  requires 'overload';
  requires 'parent';
  requires 'strict';
  requires 'utf8';
  requires 'warnings';
 
=head1 AUTHOR
 
Zakariyya Mughal <zmughal@cpan.org>
 
=head1 COPYRIGHT AND LICENSE
 
This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.
 
This is free software, licensed under:

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod  view on Meta::CPAN

        image_size => [ 224, 224 ],
    },
);
 
my $model_name = 'mobilenet_v2_100_224';
 
say "Selected model: $model_name : $model_name_to_params{$model_name}{handle}";
 
my $model_uri = URI->new( $model_name_to_params{$model_name}{handle} );
$model_uri->query_form( 'tf-hub-format' => 'compressed' );
my $model_base = substr( $model_uri->path, 1 ) =~ s,/,_,gr;
my $model_archive_path = "${model_base}.tar.gz";
 
use constant IMAGENET_LABEL_COUNT_WITH_BG => 1001;
my $labels_path = ($labels_uri->path_segments)[-1];
 
my $http = HTTP::Tiny->new;
 
for my $download ( [ $model_uri  => $model_archive_path ],

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod  view on Meta::CPAN

                        alt => $image_name,
                        width => '50%',
                    })
                ),
            )
        })
    );
}
 
sub imager_paste_center_pad {
    my ($inner, $padded_sz, @rest) = @_;
 
    my $outer = Imager->new( List::Util::mesh( [qw(xsize ysize)], $padded_sz ),
        @rest
    );
 
    $outer->paste(
        left => int( ($outer->getwidth  - $inner->getwidth ) / 2 ),
        top  => int( ($outer->getheight - $inner->getheight) / 2 ),
        src  => $inner,
    );
 
    $outer;
}
 
sub imager_scale_to {
    my ($img, $image_size) = @_;
    my $rescaled = $img->scale(
        List::Util::mesh( [qw(xpixels ypixels)], $image_size ),
        type => 'min',
        qtype => 'mixing', # 'mixing' seems to work better than 'normal'
    );
}
 
sub load_image_to_pdl {
    my ($uri, $image_size) = @_;
 
    my $http = HTTP::Tiny->new;
    my $response = $http->get( $uri );
    die "Could not fetch image from $uri" unless $response->{success};
    say "Downloaded $uri";
 
    my $img = Imager->new;
    $img->read( data => $response->{content} );
 
    my $rescaled = imager_scale_to($img, $image_size);
 
    say sprintf "Rescaled image from [ %d x %d ] to [ %d x %d ]",
        $img->getwidth, $img->getheight,
        $rescaled->getwidth, $rescaled->getheight;
 
    my $padded = imager_paste_center_pad($rescaled, $image_size,
        # ARGB fits in 32-bits (uint32_t)
        channels => 4
    );
 
    say sprintf "Padded to [ %d x %d ]", $padded->getwidth, $padded->getheight;
 
    # Create PDL ndarray from Imager data in-memory.
    my $data;
    $padded->write( data => \$data, type => 'raw' )
        or die "could not write ". $padded->errstr;

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod
=pod
 
=encoding UTF-8
 
=head1 NAME
 
AI::TensorFlow::Libtensorflow::Manual::Notebook::InferenceUsingTFHubMobileNetV2Model - Using TensorFlow to do image classification using a pre-trained model
 
=head1 SYNOPSIS
 
The following tutorial is based on the L<Image Classification with TensorFlow Hub notebook|https://github.com/tensorflow/docs/blob/master/site/en/hub/tutorials/image_classification.ipynb>. It uses a pre-trained model based on the I<MobileNet V2> arch...
 
Please look at the L<SECURITY note|https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md> regarding running models, as models are programs. You can also use C<saved_model_cli scan> to check for L<security-sensitive "denylisted ops"|https:/...
 
If you would like to visualise a model, you can use L<Netron|https://github.com/lutzroeder/netron> on the C<.pb> file.
 
=head1 COLOPHON
 
The following document is either a POD file which can additionally be run as a Perl script or a Jupyter Notebook which can be run in L<IPerl|https://p3rl.org/Devel::IPerl> (viewable online at L<nbviewer|https://nbviewer.org/github/EntropyOrg/perl-AI-...
 
If you are running the code, you may optionally install the L<C<tensorflow> Python package|https://www.tensorflow.org/install/pip> in order to access the C<saved_model_cli> command, but this is only used for informational purposes.

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod
 
B<RESULT>:
 
  1
 
We download the model and labels to the current directory, then extract the model to a folder with the name given in C<$model_base>.
 
  my $model_uri = URI->new( $model_name_to_params{$model_name}{handle} );
  $model_uri->query_form( 'tf-hub-format' => 'compressed' );
  my $model_base = substr( $model_uri->path, 1 ) =~ s,/,_,gr;
  my $model_archive_path = "${model_base}.tar.gz";
   
  use constant IMAGENET_LABEL_COUNT_WITH_BG => 1001;
  my $labels_path = ($labels_uri->path_segments)[-1];
   
  my $http = HTTP::Tiny->new;
   
  for my $download ( [ $model_uri  => $model_archive_path ],

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod
  my $saved_model = path($model_base)->child('saved_model.pb');
  say "Saved model is in $saved_model" if -f $saved_model;
   
  my @labels = path($labels_path)->lines( { chomp => 1 });
  die "Labels should have @{[ IMAGENET_LABEL_COUNT_WITH_BG ]} items"
      unless @labels == IMAGENET_LABEL_COUNT_WITH_BG;
  say "Got labels: ", join( ", ", List::Util::head(5, @labels) ), ", etc.";
 
B<STREAM (STDOUT)>:
 
  Downloading https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/classification/5?tf-hub-format=compressed to google_imagenet_mobilenet_v2_100_224_classification_5.tar.gz
  Saved model is in google_imagenet_mobilenet_v2_100_224_classification_5/saved_model.pb
  Got labels: background, tench, goldfish, great white shark, tiger shark, etc.
 
B<RESULT>:
 
  1
 
=head2 Load the model and session

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod
  Method name is: tensorflow/serving/predict
 
B<RESULT>:
 
  1
 
The above C<saved_model_cli> output shows that the model input is at C<serving_default_inputs:0>, meaning the operation named C<serving_default_inputs> at index C<0>, and the output is at C<StatefulPartitionedCall:0>, meaning the operation named...
 
It also shows the type and shape of the C<TFTensor>s for those inputs and outputs. Together this is known as a signature.
 
For the C<input>, we have C<(-1, 224, 224, 3)>, which is a L<common input image specification for TensorFlow Hub|https://www.tensorflow.org/hub/common_signatures/images#input>. This is known as the C<channels_last> (or C<NHWC>) layout, where the TensorFlow...

For the C<output>, we have C<(-1, 1001)>, which is C<[batch_size, num_classes]>, where each element is the score that the image received for that ImageNet class.
 
Now we can load the model from that folder with the tag set C<[ 'serve' ]> by using the C<LoadFromSavedModel> constructor to create a C<::Graph> and a C<::Session> for that graph.
 
  my $opt = AI::TensorFlow::Libtensorflow::SessionOptions->New;
   
  my $graph = AI::TensorFlow::Libtensorflow::Graph->New;
  my $session = AI::TensorFlow::Libtensorflow::Session->LoadFromSavedModel(
      $opt, undef, $model_base, \@tags, $graph, undef, $s
  );
  AssertOK($s);
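
With the session created, the signature's named endpoints can be looked up on the graph and wrapped as C<::Output> specifications (a hedged sketch following the pattern used in C<t/05_session_run.t>; the variable names are illustrative):

  use AI::TensorFlow::Libtensorflow::Output;

  # Input operation "serving_default_inputs" at output index 0.
  my $input = AI::TensorFlow::Libtensorflow::Output->New({
      oper  => $graph->OperationByName('serving_default_inputs'),
      index => 0,
  });

  # Output operation "StatefulPartitionedCall" at output index 0.
  my $output = AI::TensorFlow::Libtensorflow::Output->New({
      oper  => $graph->OperationByName('StatefulPartitionedCall'),
      index => 0,
  });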

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod
          })
      );
  }
 
B<DISPLAY>:
 
=for html <span style="display:inline-block;margin-left:1em;"><p><table style="width: 100%"><tr><td><tt>apple</tt></td><td><a href="https://upload.wikimedia.org/wikipedia/commons/1/15/Red_Apple.jpg"><img alt="apple" src="https://upload.wikimedia.org/...
 
=head2 Download the test images and transform them into suitable input data
 
We now fetch these images and prepare them to be in the format needed by using C<Imager> to resize and add padding. Then we turn the C<Imager> data into a C<PDL> ndarray. Since the C<Imager> data is stored as 32-bits with 4 channels in the order ...
 
We then take all the PDL ndarrays and concatenate them. Again, note that the dimension lists for the PDL ndarray and the TFTensor are reversed.
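
The concatenation step itself can be a one-liner with PDL's C<cat>, which stacks ndarrays along a new trailing dimension; because PDL dimension lists are reversed relative to C<TFTensor>s, that trailing dimension is the batch dimension (an illustrative sketch, assuming C<@pdl_images> holds the per-image ndarrays):

  # PDL dims [ channel, width, height ] per image become
  # [ channel, width, height, batch ] after cat(), i.e. the
  # TFTensor layout [ batch, height, width, channel ] (NHWC).
  my $batched = cat( @pdl_images );

The notebook's image helper functions follow.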
 
  sub imager_paste_center_pad {
      my ($inner, $padded_sz, @rest) = @_;
   
      my $outer = Imager->new( List::Util::mesh( [qw(xsize ysize)], $padded_sz ),
          @rest
      );
   
      $outer->paste(
          left => int( ($outer->getwidth  - $inner->getwidth ) / 2 ),
          top  => int( ($outer->getheight - $inner->getheight) / 2 ),
          src  => $inner,
      );
   
      $outer;
  }
   
  sub imager_scale_to {
      my ($img, $image_size) = @_;
      my $rescaled = $img->scale(
          List::Util::mesh( [qw(xpixels ypixels)], $image_size ),
          type => 'min',
          qtype => 'mixing', # 'mixing' seems to work better than 'normal'
      );
  }
   
  sub load_image_to_pdl {
      my ($uri, $image_size) = @_;
   
      my $http = HTTP::Tiny->new;
      my $response = $http->get( $uri );
      die "Could not fetch image from $uri" unless $response->{success};
      say "Downloaded $uri";
   
      my $img = Imager->new;
      $img->read( data => $response->{content} );
   
      my $rescaled = imager_scale_to($img, $image_size);
   
      say sprintf "Rescaled image from [ %d x %d ] to [ %d x %d ]",
          $img->getwidth, $img->getheight,
          $rescaled->getwidth, $rescaled->getheight;
   
      my $padded = imager_paste_center_pad($rescaled, $image_size,
          # ARGB fits in 32-bits (uint32_t)
          channels => 4
      );
   
      say sprintf "Padded to [ %d x %d ]", $padded->getwidth, $padded->getheight;
   
      # Create PDL ndarray from Imager data in-memory.
      my $data;
      $padded->write( data => \$data, type => 'raw' )
          or die "could not write ". $padded->errstr;

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod
B<STREAM (STDERR)>:
 
=for html <span style="display:inline-block;margin-left:1em;"><pre style="display: block"><code><span style="color: #cc66cc;">AI::TensorFlow::Libtensorflow::Tensor</span><span style=""> </span><span style="color: #33ccff;">{</span><span style="">
    </span><span style="color: #6666cc;">Type           </span><span style=""> </span><span style="color: #cc66cc;">FLOAT</span><span style="">
    </span><span style="color: #6666cc;">Dims           </span><span style=""> </span><span style="color: #33ccff;">[</span><span style=""> </span><span style="color: #ff6633;">1</span><span style=""> </span><span style="color: #ff6633;">1001</span><...
    </span><span style="color: #6666cc;">NumDims        </span><span style=""> </span><span style="color: #ff6633;">2</span><span style="">
    </span><span style="color: #6666cc;">ElementCount   </span><span style=""> </span><span style="color: #ff6633;">1001</span><span style="">
</span><span style="color: #33ccff;">}</span><span style="">
</span></code></pre></span>
 
Then we send the batched image data. The returned scores need to be normalised using the L<softmax function|https://en.wikipedia.org/wiki/Softmax_function> with the following formula (taken from Wikipedia):
 
$$ \sigma(\mathbf{z})_{i} = \frac{e^{z_{i}}}{\sum_{j=1}^{K} e^{z_{j}}} \quad \text{for } i = 1, \dotsc, K \text{ and } \mathbf{z} = (z_{1}, \dotsc, z_{K}) \in \mathbb{R}^{K}. $$
 
  my $output_pdl_batched = FloatTFTensorToPDL($RunSession->($session, $t));
  my $softmax = sub { ( map $_/sumover($_)->dummy(0), exp($_[0]) )[0] };
  my $probabilities_batched = $softmax->($output_pdl_batched);
  p $probabilities_batched;
 
B<STREAM (STDERR)>:

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod
                      $probabilities_batched->at($label_index,$batch_idx),
              ) ];
          }
          say generate_table( rows => [ $header, @rows ], header_row => 1 );
          print "\n";
      }
  }
 
B<DISPLAY>:
 
=for html <span style="display:inline-block;margin-left:1em;"><p><table style="width: 100%"><tr><td><tt>apple</tt></td><td><a href="https://upload.wikimedia.org/wikipedia/commons/1/15/Red_Apple.jpg"><img alt="apple" src="https://upload.wikimedia.org/...
 
  my $p_approx_batched = $probabilities_batched->sumover->approx(1, 1e-5);
  p $p_approx_batched;
  say "All probabilities sum up to approximately 1" if $p_approx_batched->all->sclr;
 
B<STREAM (STDOUT)>:
 
  All probabilities sum up to approximately 1
 
B<STREAM (STDERR)>:

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod
  my @solid_channel_uris = (
  );
  undef;
 
=head1 CPANFILE
 
  requires 'AI::TensorFlow::Libtensorflow';
  requires 'AI::TensorFlow::Libtensorflow::DataType';
  requires 'Archive::Extract';
  requires 'Data::Printer';
  requires 'Data::Printer::Filter::PDL';
  requires 'FFI::Platypus::Buffer';
  requires 'FFI::Platypus::Memory';
  requires 'File::Which';
  requires 'Filesys::DiskUsage';
  requires 'HTML::Tiny';
  requires 'HTTP::Tiny';
  requires 'Imager';
  requires 'List::Util';
  requires 'PDL';
  requires 'PDL::GSL::RNG';
  requires 'Path::Tiny';
  requires 'Syntax::Construct';
  requires 'Text::Table::Tiny';
  requires 'URI';
  requires 'constant';
  requires 'feature';
  requires 'lib::projectroot';
  requires 'strict';
  requires 'utf8';
  requires 'warnings';
 
=head1 AUTHOR
 
Zakariyya Mughal <zmughal@cpan.org>
 
=head1 COPYRIGHT AND LICENSE
 
This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.
 
This is free software, licensed under:

lib/AI/TensorFlow/Libtensorflow/Manual/Quickstart.pod
This provides a tour of C<libtensorflow> to help get started with using the
library.
 
=head1 CONVENTIONS
 
The library uses the UpperCamelCase naming convention for method names in order
to match the underlying C library (for compatibility with future API changes)
and to make translating code from C easier, as this is a low-level API.
 
As such, constructors for objects that correspond to C<libtensorflow> data
structures are typically called C<New>. For example, a new
L<AI::TensorFlow::Libtensorflow::Status> object can be created as follows
 
  use AI::TensorFlow::Libtensorflow::Status;
  my $status = AI::TensorFlow::Libtensorflow::Status->New;
 
  ok defined $status, 'Created new Status';
 
These C<libtensorflow> data structures use L<destructors|perlobj/Destructors> where necessary.
 
=head1 OBJECT TYPES
 
=over 4
 
=item L<AI::TensorFlow::Libtensorflow::Status>
 
Used for error handling. Many methods take this as the final argument, which is
then checked after the method call to ensure that it completed successfully.
 
=item L<AI::TensorFlow::Libtensorflow::Tensor>, L<AI::TensorFlow::Libtensorflow::DataType>
 
A C<TFTensor> is a multi-dimensional data structure that stores the data for inputs and outputs.
Each element has the same data type,
which is defined by L<AI::TensorFlow::Libtensorflow::DataType>;
thus a C<TFTensor> is considered to be a "homogeneous data structure".
See L<Introduction to Tensors|https://www.tensorflow.org/guide/tensor> for more.
 
=item L<AI::TensorFlow::Libtensorflow::OperationDescription>, L<AI::TensorFlow::Libtensorflow::Operation>
 
An operation is a function that has inputs and outputs. It has a user-defined
name (such as C<MyAdder>) and library-defined type (such as C<AddN>).
L<AI::TensorFlow::Libtensorflow::OperationDescription> is used to build an

lib/AI/TensorFlow/Libtensorflow/Manual/Quickstart.pod
The object types in L</OBJECT TYPES> are used in the following tutorials:
 
=over 4
 
=item L<InferenceUsingTFHubMobileNetV2Model|AI::TensorFlow::Libtensorflow::Manual::Notebook::InferenceUsingTFHubMobileNetV2Model>: image classification tutorial
 
This tutorial demonstrates using a pre-trained SavedModel and creating a L<AI::TensorFlow::Libtensorflow::Session> with the
L<LoadFromSavedModel|AI::TensorFlow::Libtensorflow::Session/LoadFromSavedModel>
method. It also demonstrates how to prepare image data for use as an input C<TFTensor>.
 
=item L<InferenceUsingTFHubEnformerGeneExprPredModel|AI::TensorFlow::Libtensorflow::Manual::Notebook::InferenceUsingTFHubEnformerGeneExprPredModel>: gene expression prediction tutorial
 
This tutorial builds on L<InferenceUsingTFHubMobileNetV2Model|AI::TensorFlow::Libtensorflow::Manual::Notebook::InferenceUsingTFHubMobileNetV2Model>.
It shows how to convert a pre-trained SavedModel from one that does not have a
usable signature to a new model that does. It also demonstrates how to prepare
genomic data for use as an input C<TFTensor>.
 
=back
 
=head1 DOCKER IMAGES

lib/AI/TensorFlow/Libtensorflow/OperationDescription.pm
);
$ffi->load_custom_type(PackableArrayRef('BoolArrayRef', pack_type => 'C')
        => 'tf_attr_bool_list',
);
 
$ffi->attach( [ 'NewOperation' => 'New' ] => [
        arg 'TF_Graph' => 'graph',
        arg 'string'   => 'op_type',
        arg 'string'   => 'oper_name',
] => 'TF_OperationDescription' => sub {
        my ($xs, $class, @rest) = @_;
        $xs->(@rest);
});
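
# Usage sketch (hypothetical, not part of this module):
#   my $desc = AI::TensorFlow::Libtensorflow::OperationDescription->New(
#       $graph, 'Const', 'my_const' );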
 
$ffi->attach( [ 'NewOperationLocked' => 'NewLocked' ] => [
        arg 'TF_Graph' => 'graph',
        arg 'string'   => 'op_type',
        arg 'string'   => 'oper_name',
] => 'TF_OperationDescription' );
 
$ffi->attach( 'SetDevice' => [
        arg 'TF_OperationDescription' => 'desc',

lib/AI/TensorFlow/Libtensorflow/Session.pm
my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);
 
$ffi->attach( [ 'NewSession' => 'New' ] =>
        [
                arg 'TF_Graph' => 'graph',
                arg 'TF_SessionOptions' => 'opt',
                arg 'TF_Status' => 'status',
        ],
        => 'TF_Session' => sub {
                my ($xs, $class, @rest) = @_;
                return $xs->(@rest);
        });
 
$ffi->attach( [ 'LoadSessionFromSavedModel' => 'LoadFromSavedModel' ] => [
    arg TF_SessionOptions => 'session_options',
    arg opaque => { id => 'run_options', ffi_type => 'TF_Buffer', maybe => 1 },
    arg string => 'export_dir',
    arg 'string[]' => 'tags',
    arg int => 'tags_len',
    arg TF_Graph => 'graph',
    arg opaque => { id => 'meta_graph_def', ffi_type => 'TF_Buffer', maybe => 1 },
    arg TF_Status => 'status',
] => 'TF_Session' => sub {
        my ($xs, $class, @rest) = @_;
        my ( $session_options,
                $run_options,
                $export_dir, $tags,
                $graph, $meta_graph_def,
                $status) = @rest;
 
 
        $run_options = $ffi->cast('TF_Buffer', 'opaque', $run_options)
                if defined $run_options;
        $meta_graph_def = $ffi->cast('TF_Buffer', 'opaque', $meta_graph_def)
                if defined $meta_graph_def;
 
        my $tags_len = @$tags;
 
        $xs->(

lib/AI/TensorFlow/Libtensorflow/Session.pm
=head2 LoadFromSavedModel
 
B<C API>: L<< C<TF_LoadSessionFromSavedModel>|AI::TensorFlow::Libtensorflow::Manual::CAPI/TF_LoadSessionFromSavedModel >>
 
=head1 METHODS
 
=head2 Run
 
Run the graph associated with the session starting with the supplied
C<$inputs> with corresponding values in C<$input_values>.
 
The values at the outputs given by C<$outputs> will be placed in
C<$output_values>.
 
B<Parameters>
 
=over 4
 
=item Maybe[TFBuffer] $run_options
 
Optional C<TFBuffer> containing a serialized representation of a C<RunOptions> protocol buffer.
 
=item ArrayRef[TFOutput] $inputs
 
Inputs to set.
 
=item ArrayRef[TFTensor] $input_values
 
Values to assign to the inputs given by C<$inputs>.
 
=item ArrayRef[TFOutput] $outputs

lib/AI/TensorFlow/Libtensorflow/Session.pm
Reference to where the output values for C<$outputs> will be placed.
 
=item ArrayRef[TFOperation] $target_opers
 
TODO
 
=item Maybe[TFBuffer] $run_metadata
 
Optional empty C<TFBuffer>, which will be updated to contain a serialized
representation of a C<RunMetadata> protocol buffer.
 
=item L<TFStatus|AI::TensorFlow::Libtensorflow::Lib::Types/TFStatus> $status
 
Status
 
=back
 
B<C API>: L<< C<TF_SessionRun>|AI::TensorFlow::Libtensorflow::Manual::CAPI/TF_SessionRun >>
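
Putting the parameters above together, a call looks roughly like this (a hedged sketch; C<$input>, C<$input_tensor>, C<$output>, and C<$status> are assumed to have been constructed already):

  my @output_values;   # filled in by Run
  $session->Run(
      undef,                            # run_options
      [ $input ], [ $input_tensor ],    # inputs and their values
      [ $output ], \@output_values,     # outputs and their destination
      [],                               # target_opers
      undef,                            # run_metadata
      $status,
  );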
 
=head2 PRunSetup

lib/AI/TensorFlow/Libtensorflow/TFLibrary.pm
use strict;
 
my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
 
$ffi->attach( [ 'LoadLibrary' => 'LoadLibrary' ] => [
        arg string => 'library_filename',
        arg TF_Status => 'status',
] => 'TF_Library' => sub {
        my ($xs, $class, @rest) = @_;
        $xs->(@rest);
} );
 
$ffi->attach( [ 'GetOpList' => 'GetOpList' ] => [
        arg TF_Library => 'lib_handle'
] => 'TF_Buffer' );
 
$ffi->attach( [ 'DeleteLibraryHandle' => 'DESTROY' ] => [
        arg TF_Library => 'lib_handle'
] => 'void' );
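
# Usage sketch (hypothetical, not part of this module):
#   my $lib = AI::TensorFlow::Libtensorflow::TFLibrary->LoadLibrary(
#       'libcustom_ops.so', $status );
#   my $op_list_buf = $lib->GetOpList;   # serialized OpList proto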

lib/AI/TensorFlow/Libtensorflow/TFLibrary.pm
  my $buf = AI::TensorFlow::Libtensorflow::TFLibrary->GetAllOpList();
  cmp_ok $buf->length, '>', 0, 'Got OpList buffer';
 
B<Returns>
 
=over 4
 
=item L<TFBuffer|AI::TensorFlow::Libtensorflow::Lib::Types/TFBuffer>
 
Contains a serialized C<OpList> proto for ops registered in this address space.
 
=back
 
B<C API>: L<< C<TF_GetAllOpList>|AI::TensorFlow::Libtensorflow::Manual::CAPI/TF_GetAllOpList >>
 
=head1 METHODS
 
=head2 GetOpList
 
B<C API>: L<< C<TF_GetOpList>|AI::TensorFlow::Libtensorflow::Manual::CAPI/TF_GetOpList >>

lib/AI/TensorFlow/Libtensorflow/Tensor.pm
# C: TF_AllocateTensor
#
# Constructor
$ffi->attach( [ 'AllocateTensor', 'Allocate' ],
        [
                arg 'TF_DataType'     => 'dtype',
                arg 'tf_dims_buffer'  => [ qw(dims num_dims) ],
                arg 'size_t'          => 'len',
        ],
        => 'TF_Tensor' => sub {
                my ($xs, $class, @rest) = @_;
                my ($dtype, $dims, $len) = @rest;
                if( ! defined $len ) {
                        $len = product($dtype->Size, @$dims);
                }
                my $obj = $xs->($dtype, $dims, $len);
        }
);
 
$ffi->attach( [ 'DeleteTensor' => 'DESTROY' ],
        [ arg 'TF_Tensor' => 't' ]
        => 'void'

lib/AI/TensorFlow/Libtensorflow/Tensor.pm
                if( exists $self->{_deallocator_closure} ) {
                        $self->{_deallocator_closure}->unstick;
                }
        }
);
 
$ffi->attach( [ 'TensorData' => 'Data' ],
        [ arg 'TF_Tensor' => 'self' ],
        => 'opaque'
        => sub {
                my ($xs, @rest) = @_;
                my ($self) = @rest;
                my $data_p = $xs->(@rest);
                window(my $buffer, $data_p, $self->ByteSize);
                \$buffer;
        }
);
 
$ffi->attach( [ 'TensorByteSize' => 'ByteSize' ],
        [ arg 'TF_Tensor' => 'self' ],
        => 'size_t'
);
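
# Usage sketch (hypothetical): Data() returns a scalar ref windowed over
# the tensor's underlying buffer, whose length equals ByteSize():
#   my $buf_ref = $t->Data;
#   die 'size mismatch' unless length($$buf_ref) == $t->ByteSize;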

lib/AI/TensorFlow/Libtensorflow/Tensor.pm
=head1 DESCRIPTION
 
A C<TFTensor> is an object that contains values of a
single type arranged in an n-dimensional array.
 
For types other than L<STRING|AI::TensorFlow::Libtensorflow::DataType/STRING>,
the data buffer is stored in L<row major order|https://en.wikipedia.org/wiki/Row-_and_column-major_order>.
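
For example, a C<TFTensor> can be constructed directly over a C<PDL> ndarray's data buffer, as done in C<t/05_session_run.t> (a minimal sketch; note that the PDL dimension list is passed as-is, and PDL's column-major ordering means it reads reversed relative to the C<TFTensor> shape):

  use PDL;
  use AI::TensorFlow::Libtensorflow;
  use AI::TensorFlow::Libtensorflow::DataType qw(FLOAT);

  my $p = sequence( float, 3, 2 );   # PDL dims [3,2]
  my $t = AI::TensorFlow::Libtensorflow::Tensor->New(
      FLOAT, [ $p->dims ], $p->get_dataref,
      sub { undef $p }   # deallocator: release $p once the tensor is freed
  );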
 
Of note, this is different from the definition of I<tensor> used in
mathematics and physics, which can also be represented as a
multi-dimensional array in some cases, but these tensors are
defined not by the representation but by how they transform. For
more on this see
 
=over 4
 
Lim, L.-H. (2021). L<Tensors in computations|https://galton.uchicago.edu/~lekheng/work/acta.pdf>.
Acta Numerica, 30, 555–764. Cambridge University Press.
 
=back
 
=head1 CONSTRUCTORS
 
=head2 New
 
=over 2

maint/cpanfile-git
requires 'Alien::Libtensorflow',
        branch => 'master';
requires 'PDL',
        git => 'https://github.com/PDLPorters/pdl.git',
        branch => 'master';

maint/inc/Pod/Elemental/Transformer/TF_Sig.pm
# ABSTRACT: TensorFlow signatures
 
use Moose;
extends 'Pod::Elemental::Transformer::List';
 
use feature qw{ postderef };
use lib 'lib';
use Types::Standard qw(Maybe Str Int ArrayRef CodeRef ScalarRef Ref);
use Types::Encodings qw(Bytes);

maint/inc/Pod/Elemental/Transformer/TF_Sig.pm
  unshift @replacements, $prefix if defined $prefix;
 
  @replacements;
};
 
sub __paras_for_num_marker { die "only support definition lists" }
sub __paras_for_bul_marker { die "only support definition lists" }
 
around __paras_for_def_marker => sub {
  my ($orig, $self, $rest) = @_;
 
  my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
  my $type_library = 'AI::TensorFlow::Libtensorflow::Lib::Types';
  my @types = ($rest);
  my $process_type = sub {
    my ($type) = @_;
    my $new_type_text = $type;
    my $info;
    if( eval { $info->{TT} = t($type); 1 }
      || eval { $info->{FFI} = $ffi->type_meta($type); 1 } ) {
      if( $info->{TT} && $info->{TT}->library eq $type_library ) {
        $new_type_text = "L<$type|$type_library/$type>";
      }
    } else {
      die "Could not find type constraint or FFI::Platypus type $type";
    }
 
    $new_type_text;
  };
 
  my $type_re = qr{
    \A (?<ws>\s*) (?<type> \w+)
  }xm;
  $rest =~ s[$type_re]{$+{ws} . $process_type->($+{type}) }ge;
 
  my @replacements = $orig->($self, $rest);
 
  @replacements;
};
 
1;

maint/process-notebook.pl
## Edit to NAME
perl -0777 -pi -e 's/(=head1 NAME\n+)$ENV{SRC_BASENAME}/\1$ENV{PODNAME}/' $DST
 
## Edit to local section link (Markdown::Pod does not yet recognise this).
perl -pi -E 's,\QL<CPANFILE|#CPANFILE>\E,L<CPANFILE|/CPANFILE>,g' $DST
 
## Add
##   =head1 CPANFILE
##
##     requires '...';
##     requires '...';
scan-perl-prereqs-nqlite --cpanfile $DST | perl -M5';print qq|=head1 CPANFILE\n\n|' -plE '$_ = q|  | . $_;' | sponge -a $DST ;
 
## Check output (if on TTY)
if [ -t 0 ]; then
        perldoc $DST;
fi
 
## Check and run script in the directory of the original (e.g., to get data
## files).
perl -c $DST

t/05_session_run.t
die "Can not init input op" unless $input_op;
 
use PDL;
my $p_data = float(
        -0.4809832, -0.3770838, 0.1743573, 0.7720509, -0.4064746, 0.0116595, 0.0051413, 0.9135732, 0.7197526, -0.0400658, 0.1180671, -0.6829428,
        -0.4810135, -0.3772099, 0.1745346, 0.7719303, -0.4066443, 0.0114614, 0.0051195, 0.9135003, 0.7196983, -0.0400035, 0.1178188, -0.6830465,
        -0.4809143, -0.3773398, 0.1746384, 0.7719052, -0.4067171, 0.0111654, 0.0054433, 0.9134697, 0.7192584, -0.0399981, 0.1177435, -0.6835230,
        -0.4808300, -0.3774327, 0.1748246, 0.7718700, -0.4070232, 0.0109549, 0.0059128, 0.9133330, 0.7188759, -0.0398740, 0.1181437, -0.6838635,
        -0.4807833, -0.3775733, 0.1748378, 0.7718275, -0.4073670, 0.0107582, 0.0062978, 0.9131795, 0.7187147, -0.0394935, 0.1184392, -0.6840039,
);
$p_data->reshape(1,5,12);
 
my $input_tensor = AI::TensorFlow::Libtensorflow::Tensor->New(
        FLOAT, [ $p_data->dims ], $p_data->get_dataref,
        sub { undef $p_data }
);
 
 
my $output_op = Output->New({
        oper => $graph->OperationByName( 'output_node0'),
        index => 0 } );

t/upstream/CAPI/003_Tensor.t
        #
        # It should not be called in this case because aligned_alloc() is used.
        ok ! $deallocator_called, 'deallocator not called yet';
 
        is $t->Type, 'FLOAT', 'FLOAT TF_Tensor';
        is $t->NumDims, 2, '2D TF_Tensor';
        is $t->Dim(0), $dims[0], 'dim 0';
        is $t->Dim(1), $dims[1], 'dim 1';
        is $t->ByteSize, $num_bytes, 'bytes';
        is scalar_to_pointer(${$t->Data}), scalar_to_pointer($values),
                'data at same pointer address';
        undef $t;
        ok $deallocator_called, 'deallocated';
};
 
done_testing;

t/upstream/CAPI/018_ImportGraphDef.t
ok $graph->OperationByName( 'scalar' ), 'got scalar operation from graph';
TF_Utils::Neg( $oper, $graph, $s );
TF_Utils::AssertStatusOK($s);
ok $graph->OperationByName( 'neg' ), 'got neg operation from graph';
 
note 'Export to a GraphDef.';
my $graph_def = AI::TensorFlow::Libtensorflow::Buffer->New;
$graph->ToGraphDef( $graph_def, $s );
TF_Utils::AssertStatusOK($s);
 
note 'Import it, with a prefix, in a fresh graph.';
undef $graph;
$graph = AI::TensorFlow::Libtensorflow::Graph->New;
my $opts = AI::TensorFlow::Libtensorflow::ImportGraphDefOptions->New;
$opts->SetPrefix('imported');
$graph->ImportGraphDef($graph_def, $opts, $s);
TF_Utils::AssertStatusOK($s);
 
ok my $scalar = $graph->OperationByName('imported/scalar'), 'imported/scalar';
ok my $feed = $graph->OperationByName('imported/feed'), 'imported/feed';
ok my $neg = $graph->OperationByName('imported/neg'), 'imported/neg';

t/upstream/CAPI/018_ImportGraphDef.t
operation, into the same graph.|;
undef $opts;
$opts = AI::TensorFlow::Libtensorflow::ImportGraphDefOptions->New;
$opts->SetPrefix('imported2');
$opts->AddInputMapping( 'scalar', 0, $TFOutput->coerce([$scalar=>0]));
$opts->AddReturnOutput('feed', 0);
$opts->AddReturnOutput('scalar', 0);
is $opts->NumReturnOutputs, 2, 'num return outputs';
$opts->AddReturnOperation('scalar');
is $opts->NumReturnOperations, 1, 'num return operations';
my $results = $graph->ImportGraphDefWithResults( $graph_def, $opts, $s );
TF_Utils::AssertStatusOK($s);
 
ok my $scalar2 = $graph->OperationByName("imported2/scalar"), "imported2/scalar";
ok my $feed2 = $graph->OperationByName("imported2/feed"), "imported2/feed";
ok my $neg2 = $graph->OperationByName("imported2/neg"), "imported2/neg";
 
note 'Check input mapping';
$neg_input = $neg->Input( $TFInput->coerce( [$neg => 0 ]) );
is $neg_input, object {
        call sub { shift->oper->Name } => $scalar->Name;
        call index => 0;
}, 'neg input';
 
note 'Check return outputs';
my $return_outputs = $results->ReturnOutputs;
is $return_outputs, array {
        item 0 => object {
                call sub { shift->oper->Name } => $feed2->Name;
                call index => 0;
        };
        item 1 => object {
                # remapped
                call sub { shift->oper->Name } => $scalar->Name;
                call index => 0;
        };
        end;
}, 'return outputs';
 
note 'Check return operation';
my $return_opers = $results->ReturnOperations;
is $return_opers, array {
        item 0 => object {
                # not remapped
                call Name => $scalar2->Name;
        };
        end;
}, 'return opers';
 
undef $results;
 
note 'Import again, with control dependencies, into the same graph.';
undef $opts;
$opts = AI::TensorFlow::Libtensorflow::ImportGraphDefOptions->New;
$opts->SetPrefix("imported3");
$opts->AddControlDependency($feed);
$opts->AddControlDependency($feed2);
$graph->ImportGraphDef($graph_def, $opts, $s);
TF_Utils::AssertStatusOK($s);


