CONTRIBUTING
<<<=== COPYRIGHT CONTRIBUTIONS ===>>>
[ BEGIN, APTECH FAMILY COPYRIGHT ASSIGNMENT AGREEMENT ]
By contributing to this repository, you agree that any and all such Contributions and derivative works thereof shall immediately become part of the APTech Family of software and documentation, and you accept and agree to the following legally-binding...
1. Definitions.
"You"
or
"Your"
shall mean the copyright owner, or legal entity authorized by the copyright owner, that is making this Agreement. For legal entities, the entity making a Contribution and all other entities that control, are controlled by, or are und...
"APTech"
is
defined
as the Delaware corporation named Auto-Parallel Technologies, Inc.
with
a primary place of business in Cedar Park, Texas, USA.
The "APTech Family of software and documentation" (hereinafter the "APTech Family") is defined as all copyrightable works identified as "part of the APTech Family" immediately following their copyright notice, and includes but is not limited to this ...
"Team APTech"
is
defined
as all duly-authorized contributors to the APTech Family, including You
after
making Your first Contribution to the APTech Family under the terms of this Agreement.
"Team APTech Leadership"
is
defined
as all duly-authorized administrators and official representatives of the APTech Family, as listed publicly on the most up-to-date copy of the AutoParallel.com website.
"Contribution"
shall mean any original work of authorship, including any changes or additions or enhancements to an existing work, that is intentionally submitted by You to this repository
for
inclusion in, or documentation of, any of the products or...
2. Assignment of Copyright. Subject to the terms and conditions of this Agreement, and for good and valuable consideration, receipt of which You acknowledge, You hereby transfer to the Delaware corporation named Auto-Parallel Technologies, Inc. with ...
You hereby agree that if You have or acquire hereafter any patent or interface copyright or other intellectual property interest dominating the software or documentation contributed to by the Work (or use of that software or documentation), such domi...
You hereby represent and warrant that You are the sole copyright holder for the Work and that You have the right and power to enter into this legally-binding contractual agreement. You hereby indemnify and hold harmless APTech, its heirs, assignees,...
3. Grant of Patent License. Subject to the terms and conditions of this Agreement, You hereby grant to APTech and to recipients of software distributed by APTech a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as ...
4. You represent that you are legally entitled to assign the above copyright and grant the above patent license. If your employer(s) or contractee(s) have rights to intellectual property that you create that includes your Contributions, then you rep...
5. You represent that each of Your Contributions is Your original creation and is not subject to any third-party license or other restriction (including, but not limited to, related patents and trademarks) of which you are personally aware and which ...
6. You agree to submit written notification to Team APTech's Leadership of any facts or circumstances of which you become aware that would make the representations of this Agreement inaccurate in any respect.
[ END, APTECH FAMILY COPYRIGHT ASSIGNMENT AGREEMENT ]
<<<=== LEGAL OVERVIEW ===>>>
All APTech Family software and documentation is legally copyrighted by Auto-Parallel Technologies, Inc.
To maintain the legal integrity and defensibility of the APTech Family of software and documentation, all contributors to the APTech Family must assign copyright ownership to Auto-Parallel Technologies, Inc. under the terms of the APTech Family Copyr...
Why The FSF Gets Copyright Assignments From Contributors
By Professor Eben Moglen, Columbia University Law School
Copyright © 2001, 2008, 2009, 2014 Free Software Foundation, Inc.
The quoted text below is not modified, and is licensed under a Creative Commons Attribution-NoDerivs 3.0 United States License.
"Under US copyright law, which is the law under which most free software programs have historically been first published, there are very substantial procedural advantages to registration of copyright. And despite the broad right of distribution conv...
In order to make sure that all of our copyrights can meet the recordkeeping and other requirements of registration, and in order to be able to enforce the GPL most effectively, FSF requires that each author of code incorporated in FSF projects provid...
<<<=== COMMITMENT TO FREE & OPEN SOURCE SOFTWARE ===>>>
Auto-Parallel Technologies, Inc. is committed to maintaining the free-and-open-source software (FOSS) basis of the APTech Family.
If your APTech Family contribution is accepted and merged into an official APTech Family source repository, then your contribution is automatically published online with FOSS licensing, currently the Apache License Version 2.0.
<<<=== EMPLOYER COPYRIGHT DISCLAIMER AGREEMENT ===>>>
The file named EMPLOYERS.pdf contains the Employer Copyright Disclaimer Agreement. If you are employed or work as an independent contractor, and either your job involves computer programming or you have executed an agreement giving your employer or ...
<<<=== OTHER CONTRIBUTORS ===>>>
If anyone other than yourself has written software source code or documentation as part of your APTech Family contribution, then they must submit their contributions themselves under the terms of the APTech Family Copyright Assignment Agreement above...
Please be sure you DO NOT STUDY OR INCLUDE any 3rd-party or public-domain intellectual property as part of your APTech Family contribution, including but not limited to: source code; documentation; copyrighted, trademarked, or patented components; or...
<<<=== RECOGNITION ===>>>
Once we have received your contribution under the terms of the APTech Family Copyright Assignment Agreement above, as well as any necessary Employer Copyright Disclaimer Agreement(s), then we will begin the process of reviewing any software pull requ...
<<<=== SUBMISSION ===>>>
When you are ready to submit the signed agreement(s), please answer the following 12 questions about yourself and your APTech Family contribution, then include your answers in the body of your e-mail or on a separate sheet of paper in snail mail, and...
1. Full Legal Name
2. Preferred Pseudonym (or "none")
3. Country of Citizenship
4. Date of Birth (spell full month name)
5. Snail Mail Address (include country)
6. E-Mail Address
7. Names of APTech Family Files Modified (or "none")
8. Names of APTech Family Files Created (or "none")
9. Current Employer(s) or Contractee(s) (or "none")
10. Does Your Job Involve Computer Programming? (or "not applicable")
11. Does Your Job Involve an IP Ownership Agreement? (or "not applicable")
12. Name(s) & Employer(s) of Additional Contributors (or
"none"
)
Snail Mail Address:
Auto-Parallel Technologies, Inc.
[ CONTACT VIA E-MAIL BELOW FOR STREET ADDRESS ]
Cedar Park, TX, USA, 78613
E-Mail Address (Remove "NOSPAM." Before Sending):
william.braswell at NOSPAM.autoparallel.com
THANKS FOR CONTRIBUTING! :-)
AI::TensorFlow::Libtensorflow is Copyright © 2022 Auto-Parallel Technologies, Inc.
All rights reserved.
AI::TensorFlow::Libtensorflow is part of the APTech Family of software and documentation.
This program is free software; you can redistribute it and/or modify
it under the terms of the Apache License Version 2.0.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
Apache License Version 2.0 for more details.
0.0.7 2023-10-05 01:27:42-0400
Features
- Add object detection demo. See <https://github.com/EntropyOrg/perl-AI-TensorFlow-Libtensorflow/pull/23>.
Refactoring
- Add timer to the notebooks to time the inference steps. See <https://github.com/EntropyOrg/perl-AI-TensorFlow-Libtensorflow/pull/17>.
Documentation
- Add information about installing GPU version of `libtensorflow` either on
Update the CI to additionally build the GPU Docker image. See <https://github.com/EntropyOrg/perl-AI-TensorFlow-Libtensorflow/pull/16>.
0.0.6 2023-01-30 15:22:04-0500
- Documentation
- Fix NAME for Notebook POD.
0.0.5 2023-01-30 11:46:31-0500
- Features
- Docker images with dependencies for notebooks.
- Support for running notebooks in Binder.
- Documentation
- Add manual index and quickstart guide.
- Add InferenceUsingTFHubEnformerGeneExprPredModel tutorial.
0.0.4 2022-12-21 15:57:53-0500
- Features
- Add Data::Printer and stringification support for several classes.
- Add `::TFLibrary` class. Move `GetAllOpList()` method there.
- Documentation
- Add InferenceUsingTFHubMobileNetV2Model tutorial.
0.0.3 2022-12-15 10:46:52-0500
- Features
- Add more testing of basic API. Complete port of "(CAPI, *)" tests from upstream `tensorflow/c/c_api_test.cc`.
0.0.2 2022-11-28 14:33:33-0500
- Features
- Explicit support for minimum Perl v5.14.
0.0.1 2022-11-25 11:43:37-0500
Features
- First release.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity"
shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control
with
that entity. For the purposes of this definition,
"control"
means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You"
(or
"Your"
) shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source"
form shall mean the preferred form
for
making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object"
form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work"
shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works"
shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and
for
which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution"
shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor"
shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution
has
been received by Licensor and
subsequently incorporated within the Work.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
Copyright 2022 Auto-Parallel Technologies, Inc
Licensed under the Apache License, Version 2.0 (the "License");
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
      "examples",
      "inc",
      "share",
      "t",
      "xt",
      "maint"
    ]
  },
"prereqs"
: {
"configure"
: {
"requires"
: {
"ExtUtils::MakeMaker"
:
"0"
,
"perl"
:
"5.014"
}
},
"develop"
: {
"requires"
: {
"Moose"
:
"0"
,
"Moose::Role"
:
"0"
,
"Pod::Simple::Search"
:
"0"
,
"Test::More"
:
"0.88"
,
"Test::Perl::Critic"
:
"0"
,
"Test::Pod::LinkCheck::Lite"
:
"0"
,
"Test::Pod::Snippets"
:
"0"
,
"Test::Pod::Snippets::Parser"
:
"0"
,
"With::Roles"
:
"0"
},
        "Module::Runtime" : "0",
        "Mu" : "0",
        "Path::Tiny" : "0",
        "Sort::Key::Multi" : "0",
        "Sub::Uplevel" : "0",
        "Syntax::Construct" : "0",
        "Types::Path::Tiny" : "0"
      }
    },
"runtime"
: {
"requires"
: {
"Alien::Libtensorflow"
:
"0"
,
"Class::Tiny"
:
"0"
,
"Const::Exporter"
:
"0"
,
"Const::Fast"
:
"0"
,
"Devel::StrictMode"
:
"0"
,
"Exporter::Tiny"
:
"0"
,
"FFI::C"
:
"0.12"
,
"FFI::C::ArrayDef"
:
"0"
,
"FFI::C::StructDef"
:
"0"
,
"FFI::CheckLib"
:
"0.28"
,
        "perl" : "5.014",
        "strict" : "0",
        "warnings" : "0"
      },
      "suggests" : {
        "Data::Printer" : "0",
        "PDL" : "0"
      }
    },
"test"
: {
"requires"
: {
"Data::Dumper"
:
"0"
,
"PDL"
:
"0"
,
"PDL::Core"
:
"0"
,
"Path::Tiny"
:
"0"
,
"Test2::V0"
:
"0"
,
"Test::More"
:
"0"
,
"aliased"
:
"0"
,
"lib"
:
"0"
,
"perl"
:
"5.014"
}
}
},
"release_status"
:
"stable"
,
"resources"
: {
"repository"
: {
"type"
:
"git"
,
}
},
"version"
:
"0.0.7"
,
"x_generated_by_perl"
:
"v5.26.1"
,
"x_serialization_backend"
:
"Cpanel::JSON::XS version 4.37"
,
"x_spdx_expression"
:
"Apache-2.0"
}
---
abstract: 'Bindings for Libtensorflow deep learning library'
author:
  - 'Zakariyya Mughal <zmughal@cpan.org>'
build_requires:
  Data::Dumper: '0'
  PDL: '0'
  PDL::Core: '0'
  Path::Tiny: '0'
  Test2::V0: '0'
  Test::More: '0'
  aliased: '0'
  lib: '0'
  perl: '5.014'
configure_requires:
  ExtUtils::MakeMaker: '0'
  perl: '5.014'
dynamic_config: 0
generated_by: 'Dist::Zilla version 6.030, CPAN::Meta::Converter version 2.150010'
license: apache
meta-spec:
  version: '1.4'
name: AI-TensorFlow-Libtensorflow
no_index:
  directory:
    - eg
    - examples
    - inc
    - share
    - t
    - xt
    - maint
requires:
  Alien::Libtensorflow: '0'
  Class::Tiny: '0'
  Const::Exporter: '0'
  Const::Fast: '0'
  Devel::StrictMode: '0'
  Exporter::Tiny: '0'
  FFI::C: '0.12'
  FFI::C::ArrayDef: '0'
  FFI::C::StructDef: '0'
  FFI::CheckLib: '0.28'
  Types::Common: '0'
  Types::Standard: '0'
  base: '0'
  constant: '0'
  feature: '0'
  namespace::autoclean: '0'
  overload: '0'
  perl: '5.014'
  strict: '0'
  warnings: '0'
resources:
version: 0.0.7
x_generated_by_perl: v5.26.1
x_serialization_backend: 'YAML::Tiny version 1.74'
x_spdx_expression: Apache-2.0
;; For xt/author/pod-linkcheck.t
; authordep Test::Pod::LinkCheck::Lite
;; For xt/author/pod-snippets.t
; authordep Test::Pod::Snippets
; authordep Pod::Simple::Search
; authordep With::Roles
[Test::Perl::Critic]
; authordep Perl::Critic::Community
[Prereqs / RuntimeRequires]
; Needs Perl v5.14 for Feature::Compat::Defer
perl = 5.014
FFI::Platypus = 2.00
FFI::C = 0.12
FFI::CheckLib = 0
FFI::Platypus::Type::Enum = 0
FFI::Platypus::Type::PtrObject = 0
[Prereqs / RuntimeSuggests]
PDL = 0
lib/AI/TensorFlow/Libtensorflow/ApiDefMap.pm

use namespace::autoclean;

my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);

$ffi->attach( [ 'NewApiDefMap' => 'New' ] => [
	arg 'TF_Buffer' => 'op_list_buffer',
	arg 'TF_Status' => 'status',
] => 'TF_ApiDefMap' => sub {
	my ($xs, $class, @rest) = @_;
	$xs->(@rest);
});

$ffi->attach( [ 'DeleteApiDefMap' => 'DESTROY' ] => [
	arg 'TF_ApiDefMap' => 'apimap'
] => 'void' );

$ffi->attach( [ 'ApiDefMapPut' => 'Put' ] => [
	arg 'TF_ApiDefMap' => 'api_def_map',
	arg 'tf_text_buffer' => [ qw(text text_len) ],
	arg 'TF_Status' => 'status',
lib/AI/TensorFlow/Libtensorflow/Buffer.pm

	my $opaque = $ffi->cast( 'data_deallocator_t', 'opaque', $closure );
	$self->_data_deallocator( $opaque );
}

$ffi->attach( [ 'NewBuffer' => 'New' ] => [] => 'TF_Buffer' );

$ffi->attach( [ 'NewBufferFromString' => 'NewFromString' ] => [
	arg 'tf_buffer_buffer' => [ qw(proto proto_len) ]
] => 'TF_Buffer' => sub {
	my ($xs, $class, @rest) = @_;
	$xs->(@rest);
});

$ffi->attach( [ 'DeleteBuffer' => 'DESTROY' ] => [ 'TF_Buffer' ], 'void' );
1;
__END__
=pod
lib/AI/TensorFlow/Libtensorflow/Buffer.pm

=head1 NAME
AI::TensorFlow::Libtensorflow::Buffer - Buffer that holds pointer to data with length
=head1 SYNOPSIS
use aliased 'AI::TensorFlow::Libtensorflow::Buffer' => 'Buffer';
=head1 DESCRIPTION
C<TFBuffer> is a data structure that stores a pointer to a block of data, the
length of the data, and optionally a deallocator function for memory
management.
This structure is typically used in C<libtensorflow> to store the data for a
serialized protocol buffer.
=head1 CONSTRUCTORS
=head2 New
lib/AI/TensorFlow/Libtensorflow/DataType.pm

	my $dtype = FLOAT;

	is FLOAT->Size, 4, 'FLOAT is 4 bytes large';

	is max( map { $_->Size } @DTYPES ), 16,
		'Largest type has sizeof() == 16 bytes';
=head1 DESCRIPTION
Enum representing native data types used inside of containers such as
L<TFTensor|AI::TensorFlow::Libtensorflow::Lib::Types/TFTensor>.
=head1 CONSTANTS
=head2 STRING
String.
=head2 BOOL
lib/AI/TensorFlow/Libtensorflow/DataType.pm

=head2 QUINT8
8-bit quantized unsigned integer.
=head2 QUINT16
16-bit quantized unsigned integer.
=head2 RESOURCE
Handle to a mutable resource.
=head2 VARIANT
Variant.
=head1 METHODS
=head2 Size
my $size = $dtype->Size();
lib/AI/TensorFlow/Libtensorflow/Eager/Context.pm

use strict;
use warnings;

my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);

$ffi->attach( [ 'NewContext' => 'New' ] => [
	arg TFE_ContextOptions => 'opts',
	arg TF_Status => 'status'
] => 'TFE_Context' => sub {
	my ($xs, $class, @rest) = @_;
	$xs->(@rest);
} );
__END__
=pod
=encoding UTF-8
=head1 NAME
lib/AI/TensorFlow/Libtensorflow/Graph.pm

package AI::TensorFlow::Libtensorflow::Graph;
# ABSTRACT: A TensorFlow computation, represented as a dataflow graph
$AI::TensorFlow::Libtensorflow::Graph::VERSION = '0.0.7';

use strict;
use warnings;
use namespace::autoclean;

my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);
lib/AI/TensorFlow/Libtensorflow/Graph.pm

	arg 'tf_dims_buffer' => [ qw(dims num_dims) ],
	arg 'TF_Status' => 'status',
] => 'void' );

$ffi->attach( [ 'GraphGetTensorShape' => 'GetTensorShape' ] => [
	arg 'TF_Graph' => 'graph',
	arg 'TF_Output' => 'output',
	arg 'tf_dims_buffer' => [ qw(dims num_dims) ],
	arg 'TF_Status' => 'status',
] => 'void' => sub {
	my ($xs, @rest) = @_;
	my ($graph, $output, $status) = @rest;
	my $dims = [ (0) x ( $graph->GetTensorNumDims($output, $status) ) ];
	$xs->( $graph, $output, $dims, $status );
	return $dims;
});

$ffi->attach( [ 'GraphGetTensorNumDims' => 'GetTensorNumDims' ] => [
	arg 'TF_Graph' => 'graph',
	arg 'TF_Output' => 'output',
	arg 'TF_Status' => 'status',
] => 'int' );
lib/AI/TensorFlow/Libtensorflow/Graph.pm

1;
__END__
=pod
=encoding UTF-8
=head1 NAME
AI::TensorFlow::Libtensorflow::Graph - A TensorFlow computation, represented as a dataflow graph
=head1 SYNOPSIS
use aliased 'AI::TensorFlow::Libtensorflow::Graph' => 'Graph';
=head1 DESCRIPTION
=head1 CONSTRUCTORS
=head2 New
lib/AI/TensorFlow/Libtensorflow/ImportGraphDefResults.pm

use warnings;
use namespace::autoclean;
use List::Util ();

my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);

$ffi->attach( [ 'DeleteImportGraphDefResults' => 'DESTROY' ] => [
	arg TF_ImportGraphDefResults => 'results',
] => 'void' );

$ffi->attach( [ 'ImportGraphDefResultsReturnOutputs' => 'ReturnOutputs' ] => [
	arg TF_ImportGraphDefResults => 'results',
	arg 'int*' => 'num_outputs',
	arg 'opaque*' => { id => 'outputs', type => 'TF_Output_struct_array*' },
] => 'void' => sub {
	my ($xs, $results) = @_;
	my $num_outputs;
	my $outputs_array = undef;
	$xs->( $results, \$num_outputs, \$outputs_array );
	return [] if $num_outputs == 0;

	my $sizeof_output = $ffi->sizeof('TF_Output');
	window( my $outputs_packed, $outputs_array, $sizeof_output * $num_outputs );
	# due to unpack, these are copies (no longer owned by $results)
	my @outputs = map bless(\$_, "AI::TensorFlow::Libtensorflow::Output"),
		unpack "(a${sizeof_output})*", $outputs_packed;
	return \@outputs;
});

$ffi->attach( [ 'ImportGraphDefResultsReturnOperations' => 'ReturnOperations' ] => [
	arg TF_ImportGraphDefResults => 'results',
	arg 'int*' => 'num_opers',
	arg 'opaque*' => { id => 'opers', type => 'TF_Operation_array*' },
] => 'void' => sub {
	my ($xs, $results) = @_;
	my $num_opers;
	my $opers_array = undef;
	$xs->( $results, \$num_opers, \$opers_array );
	return [] if $num_opers == 0;

	my $opers_array_base_packed = buffer_to_scalar( $opers_array,
		$ffi->sizeof('opaque') * $num_opers );
	my @opers = map {
		$ffi->cast( 'opaque', 'TF_Operation', $_ )
	} unpack "(@{[ AI::TensorFlow::Libtensorflow::Lib::_pointer_incantation ]})*",
		$opers_array_base_packed;
	return \@opers;
} );

$ffi->attach( [ 'ImportGraphDefResultsMissingUnusedInputMappings' => 'MissingUnusedInputMappings' ] => [
	arg TF_ImportGraphDefResults => 'results',
	arg 'int*' => 'num_missing_unused_input_mappings',
	arg 'opaque*' => { id => 'src_names', ctype => 'const char***' },
	arg 'opaque*' => { id => 'src_indexes', ctype => 'int**' },
] => 'void' => sub {
	my ($xs, $results) = @_;
	my $num_missing_unused_input_mappings;
	my $src_names;
	my $src_indexes;
	$xs->( $results,
		\$num_missing_unused_input_mappings,
		\$src_names, \$src_indexes );
	my $src_names_str = $ffi->cast( 'opaque',
		"string[$num_missing_unused_input_mappings]", $src_names );
	my $src_indexes_int = $ffi->cast( 'opaque',
		"int[$num_missing_unused_input_mappings]", $src_indexes );
	return [ List::Util::zip( $src_names_str, $src_indexes_int ) ];
});
lib/AI/TensorFlow/Libtensorflow/Manual.pod

=item L<AI::TensorFlow::Libtensorflow::Manual::Quickstart>
Start here to get an overview of the library.
=item L<AI::TensorFlow::Libtensorflow::Manual::GPU>
GPU-specific installation and usage information.
=item L<AI::TensorFlow::Libtensorflow::Manual::CAPI>
Appendix of all C API functions with their signatures. These are linked from
the documentation of individual methods.
=back
=head1 AUTHOR
Zakariyya Mughal <zmughal@cpan.org>
=head1 COPYRIGHT AND LICENSE
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

=head2 TF_GraphSetTensorShape
=over 2
Sets the shape of the Tensor referenced by `output` in `graph` to
the shape described by `dims` and `num_dims`.
If the number of dimensions is unknown, `num_dims` must be set to
-1 and `dims` can be null. If a dimension is unknown, the
corresponding entry in the `dims` array must be -1.
This does not overwrite the existing shape associated with `output`,
but merges the input shape with the existing shape. For example,
setting a shape of [-1, 2] with an existing shape [2, -1] would set
a final shape of [2, 2] based on shape merging semantics.
Returns an error into `status` if:
* `output` is not in `graph`.
* An invalid shape is being set (e.g., the shape being set
is incompatible with the existing shape).
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

=head2 TF_GraphGetTensorShape
=over 2
Returns the shape of the Tensor referenced by `output` in `graph`
into `dims`. `dims` must be an array large enough to hold `num_dims`
entries (e.g., the return value of TF_GraphGetTensorNumDims).
If the number of dimensions in the shape is unknown or the shape is
a scalar, `dims` will remain untouched. Otherwise, each element of
`dims` will be set corresponding to the size of the dimension. An
unknown dimension is represented by `-1`.
Returns an error into `status` if:
* `output` is not in `graph`.
* `num_dims` does not match the actual number of dimensions.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_GraphGetTensorShape(TF_Graph* graph,
TF_Output output,
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_SetAttrFuncName(TF_OperationDescription* desc,
    const char* attr_name,
    const char* value, size_t length);
=head2 TF_SetAttrShape
=over 2
Set `num_dims` to -1 to represent "unknown rank". Otherwise,
`dims` points to an array of length `num_dims`. `dims[i]` must be
>= -1, with -1 meaning "unknown dimension".
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_SetAttrShape(TF_OperationDescription* desc,
const char* attr_name,
const int64_t* dims, int num_dims);
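A minimal sketch of both forms described above (here `desc` stands for a valid `TF_OperationDescription*`, and the attribute name `"shape"` is only illustrative):

  int64_t dims[] = { -1, 2 };               /* rank 2, first dimension unknown */
  TF_SetAttrShape(desc, "shape", dims, 2);

  TF_SetAttrShape(desc, "shape", NULL, -1); /* unknown rank; dims presumably unused */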
=head2 TF_SetAttrShapeList
=over 2
`dims` and `num_dims` must point to arrays of length `num_shapes`.
Set `num_dims[i]` to -1 to represent "unknown rank". Otherwise,
`dims[i]` points to an array of length `num_dims[i]`. `dims[i][j]`
must be >= -1, with -1 meaning "unknown dimension".
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_SetAttrShapeList(TF_OperationDescription* desc,
const char* attr_name,
const int64_t* const* dims,
const int* num_dims,
int num_shapes);
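A corresponding sketch for the list form (again assuming a valid `desc`; the attribute name is illustrative):

  int64_t shape0[] = { -1, 2 };               /* rank 2, first dim unknown */
  int64_t shape1[] = { 3 };                   /* rank 1 */
  const int64_t* dims[] = { shape0, shape1 };
  int num_dims[] = { 2, 1 };
  TF_SetAttrShapeList(desc, "shapes", dims, num_dims, 2);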
=head2 TF_SetAttrTensorShapeProto
=over 2
`proto` must point to an array of `proto_len` bytes representing a
binary-serialized TensorShapeProto.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_SetAttrTensorShapeProto(
TF_OperationDescription* desc, const char* attr_name, const void* proto,
size_t proto_len, TF_Status* status);
=head2 TF_SetAttrTensorShapeProtoList
=over 2
`protos` and `proto_lens` must point to arrays of length `num_shapes`.
`protos[i]` must point to an array of `proto_lens[i]` bytes
representing a binary-serialized TensorShapeProto.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_SetAttrTensorShapeProtoList(
TF_OperationDescription* desc, const char* attr_name,
const void* const* protos, const size_t* proto_lens, int num_shapes,
TF_Status* status);
=head2 TF_SetAttrTensor
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
const char* attr_name,
TF_Tensor* const* values,
int num_values,
TF_Status* status);
=head2 TF_SetAttrValueProto
=over 2
`proto` should point to a sequence of bytes of length `proto_len`
representing a binary serialization of an AttrValue protocol
buffer.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_SetAttrValueProto(TF_OperationDescription* desc,
const char* attr_name,
const void* proto,
size_t proto_len,
TF_Status* status);
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
TF_Status* status);
=head2 TF_OperationGetAttrStringList
=over 2
Get the list of strings in the value of the attribute `attr_name`. Fills in
`values` and `lengths`, each of which must point to an array of length at
least `max_values`.
The elements of values will point to addresses in `storage` which must be at
least `storage_size` bytes in length. Ideally, max_values would be set to
TF_AttrMetadata.list_size and `storage` would be at least
TF_AttrMetadata.total_size, obtained from TF_OperationGetAttrMetadata(oper,
attr_name).
Fails if storage_size is too small to hold the requested number of strings.
=back
/* From <tensorflow/c/c_api.h> */
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
const char* attr_name,
int64_t* value,
int num_dims,
TF_Status* status);
=head2 TF_OperationGetAttrShapeList
=over 2
Fills in `dims` with the list of shapes in the attribute `attr_name` of
`oper` and `num_dims` with the corresponding number of dimensions. On return,
for every i where `num_dims[i]` > 0, `dims[i]` will be an array of
`num_dims[i]` elements. A value of -1 for `num_dims[i]` indicates that the
i-th shape in the list is unknown.
The elements of `dims` will point to addresses in `storage` which must be
large enough to hold at least `storage_size` int64_ts. Ideally, `num_shapes`
would be set to TF_AttrMetadata.list_size and `storage_size` would be set to
TF_AttrMetadata.total_size from TF_OperationGetAttrMetadata(oper,
attr_name).
Fails if storage_size is insufficient to hold the requested shapes.
=back
/* From <tensorflow/c/c_api.h> */
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
const char* attr_name,
TF_Tensor** values,
int max_values,
TF_Status* status);
=head2 TF_OperationGetAttrValueProto
=over 2
Sets `output_attr_value` to the binary-serialized AttrValue proto
representation of the value of the `attr_name` attr of `oper`.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_OperationGetAttrValueProto(
TF_Operation* oper, const char* attr_name, TF_Buffer* output_attr_value,
TF_Status* status);
=head2 TF_OperationGetNumAttrs
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern TF_Operation* TF_GraphNextOperation(TF_Graph* graph,
size_t* pos);
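A typical iteration sketch (assuming `graph` is a valid `TF_Graph*`; `pos` must start at zero and is advanced by each call):

  size_t pos = 0;
  TF_Operation* oper;
  while ((oper = TF_GraphNextOperation(graph, &pos)) != NULL) {
      /* visit oper, e.g. via TF_OperationName(oper) */
  }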
=head2 TF_GraphToGraphDef
=over 2
Write out a serialized representation of `graph` (as a GraphDef protocol
message) to `output_graph_def` (allocated by TF_NewBuffer()).
`output_graph_def`'s underlying buffer will be freed when TF_DeleteBuffer()
is called.
May fail on very large graphs in the future.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_GraphToGraphDef(TF_Graph* graph,
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_ImportGraphDefOptionsAddControlDependency(
TF_ImportGraphDefOptions* opts, TF_Operation* oper);
=head2 TF_ImportGraphDefOptionsAddReturnOutput
=over 2
Add an output in `graph_def` to be returned via the `return_outputs` output
parameter of TF_GraphImportGraphDef(). If the output is remapped via an input
mapping, the corresponding existing tensor in `graph` will be returned.
`oper_name` is copied and has no lifetime requirements.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_ImportGraphDefOptionsAddReturnOutput(
TF_ImportGraphDefOptions* opts, const char* oper_name, int index);
=head2 TF_ImportGraphDefOptionsNumReturnOutputs
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
TF_CAPI_EXPORT extern int TF_ImportGraphDefOptionsNumReturnOperations(
    const TF_ImportGraphDefOptions* opts);
=head2 TF_ImportGraphDefResultsReturnOutputs
=over 2
Fetches the return outputs requested via
TF_ImportGraphDefOptionsAddReturnOutput(). The number of fetched outputs is
returned in `num_outputs`. The array of return outputs is returned in
`outputs`. `*outputs` is owned by and has the lifetime of `results`.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_ImportGraphDefResultsReturnOutputs(
TF_ImportGraphDefResults* results, int* num_outputs, TF_Output** outputs);
=head2 TF_ImportGraphDefResultsReturnOperations
=over 2
Fetches the return operations requested via
TF_ImportGraphDefOptionsAddReturnOperation(). The number of fetched
operations is returned in `num_opers`. The array of return operations is
returned in `opers`. `*opers` is owned by and has the lifetime of `results`.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_ImportGraphDefResultsReturnOperations(
TF_ImportGraphDefResults* results, int* num_opers, TF_Operation*** opers);
=head2 TF_ImportGraphDefResultsMissingUnusedInputMappings
=over 2
Fetches any input mappings requested via
TF_ImportGraphDefOptionsAddInputMapping() that didn't appear in the GraphDef
and weren't used as input to any node in the imported graph def. The number
of fetched mappings is returned in `num_missing_unused_input_mappings`. The
array of each mapping's source node name is returned in `src_names`, and the
array of each mapping's source index is returned in `src_indexes`.
`*src_names`, `*src_indexes`, and the memory backing each string in
`src_names` are owned by and have the lifetime of `results`.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_ImportGraphDefResultsMissingUnusedInputMappings(
TF_ImportGraphDefResults* results, int* num_missing_unused_input_mappings,
const char*** src_names, int** src_indexes);
=head2 TF_DeleteImportGraphDefResults
=over 2
Deletes a results object returned by TF_GraphImportGraphDefWithResults().
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_DeleteImportGraphDefResults(
TF_ImportGraphDefResults* results);
=head2 TF_GraphImportGraphDefWithResults
=over 2
Import the graph serialized in `graph_def` into `graph`. Returns nullptr and
a bad status on error. Otherwise, returns a populated
TF_ImportGraphDefResults instance. The returned instance must be deleted via
TF_DeleteImportGraphDefResults().
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
TF_Status* status);
=head2 TF_GraphImportGraphDefWithReturnOutputs
=over 2
Import the graph serialized in `graph_def` into `graph`.
Convenience function for when only return outputs are needed.
`num_return_outputs` must be the number of return outputs added (i.e. the
result of TF_ImportGraphDefOptionsNumReturnOutputs()). If
`num_return_outputs` is non-zero, `return_outputs` must be of length
`num_return_outputs`. Otherwise it can be null.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_GraphImportGraphDefWithReturnOutputs(
TF_Graph* graph, const TF_Buffer* graph_def,
const TF_ImportGraphDefOptions* options, TF_Output* return_outputs,
int num_return_outputs, TF_Status* status);
=head2 TF_GraphImportGraphDef
=over 2
Import the graph serialized in `graph_def` into `graph`.
Convenience function for when no results are needed.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_GraphImportGraphDef(
TF_Graph* graph, const TF_Buffer* graph_def,
const TF_ImportGraphDefOptions* options, TF_Status* status);
=head2 TF_GraphCopyFunction
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern int TF_GraphNumFunctions(TF_Graph* g);
=head2 TF_GraphGetFunctions
=over 2
Fills in `funcs` with the TF_Function* registered in `g`.
`funcs` must point to an array of TF_Function* of length at least
`max_func`. In usual usage, max_func should be set to the result of
TF_GraphNumFunctions(g). In this case, all the functions registered in
`g` will be returned. Else, an unspecified subset.
If successful, returns the number of TF_Function* successfully set in
`funcs` and sets status to OK. The caller takes ownership of
all the returned TF_Functions. They must be deleted with TF_DeleteFunction.
On error, returns 0, sets status to the encountered error, and the contents
of funcs will be undefined.
=back
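A sketch of the usual calling pattern (assuming valid `g` and `status`, and `<stdlib.h>` for `malloc`; the `TF_GraphGetFunctions` signature is assumed from `<tensorflow/c/c_api.h>`, as its declaration is not reproduced here):

  int max_func = TF_GraphNumFunctions(g);
  TF_Function** funcs = malloc(max_func * sizeof(TF_Function*));
  int num = TF_GraphGetFunctions(g, funcs, max_func, status);
  for (int i = 0; i < num; i++)
      TF_DeleteFunction(funcs[i]);  /* caller owns the returned functions */
  free(funcs);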
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_FinishWhile(const TF_WhileParams* params,
TF_Status* status,
TF_Output* outputs);
=head2 TF_AbortWhile
=over 2
Frees `params`s resources without building a while loop. `params` is no
longer valid after this returns. Either this or TF_FinishWhile() must be
called after a successful TF_NewWhile() call.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_AbortWhile(const TF_WhileParams* params);
=head2 TF_AddGradients
=over 2
Adds operations to compute the partial derivatives of sum of `y`s w.r.t `x`s,
i.e., d(y_1 + y_2 + ...)/dx_1, d(y_1 + y_2 + ...)/dx_2...
`dx` are used as initial gradients (which represent the symbolic partial
derivatives of some loss function `L` w.r.t. `y`).
`dx` must be nullptr or have size `ny`.
If `dx` is nullptr, the implementation will use dx of `OnesLike` for all
shapes in `y`.
The partial derivatives are returned in `dy`. `dy` should be allocated to
size `nx`.
Gradient nodes are automatically named under the "gradients/" prefix. To
guarantee name uniqueness, subsequent calls to the same graph will
append an incremental tag to the prefix: "gradients_1/", "gradients_2/", ...
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

=head2 TF_AddGradientsWithPrefix
=over 2
Adds operations to compute the partial derivatives of sum of `y`s w.r.t `x`s,
i.e., d(y_1 + y_2 + ...)/dx_1, d(y_1 + y_2 + ...)/dx_2...
This is a variant of TF_AddGradients that allows to caller to pass a custom
name prefix to the operations added to a graph to compute the gradients.
`dx` are used as initial gradients (which represent the symbolic partial
derivatives of some loss function `L` w.r.t. `y`).
`dx` must be nullptr or have size `ny`.
If `dx` is nullptr, the implementation will use dx of `OnesLike` for all
shapes in `y`.
The partial derivatives are returned in `dy`. `dy` should be allocated to
size `nx`.
`prefix` names the scope into which all gradients operations are being added.
`prefix` must be unique within the provided graph otherwise this operation
will fail. If `prefix` is nullptr, the default prefixing behaviour takes
place, see TF_AddGradients for more details.
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
array of operations is necessary to distinguish the case of
creating a function with no body (e.g. identity or permutation)
and the case of creating a function whose body contains all
the nodes in the graph (except for the automatic skipping, see
below).
opers - Array of operations to become the body of the function or null.
        - If no array is given (`num_opers` = -1), all the
          operations in `fn_body` will become part of the function
          except operations referenced in `inputs`. These operations
          must have a single output (these operations are typically
          placeholders created for the sole purpose of representing
          an input. We can relax this constraint if there are
        - If an array is given (`num_opers` >= 0), all operations
          in it will become part of the function. In particular,
          no automatic skipping of dummy input operations is performed.
ninputs - number of elements in `inputs` array
inputs - array of TF_Outputs that specify the inputs to the function.
         If `ninputs` is zero (the function takes no inputs), `inputs`
         can be null. The names used for function inputs are normalized
         names of the operations (usually placeholders) pointed to by
         `inputs`. These operation names should start with a letter.
         Normalization will convert all letters to lowercase and
         non-alphanumeric characters to '_' to make resulting names match
         the "[a-z][a-z0-9_]*" pattern for operation argument names.
         `inputs` cannot contain the same tensor twice.
noutputs - number of elements in `outputs` array
outputs - array of TF_Outputs that specify the outputs of the function.
          If `noutputs` is zero (the function returns no outputs), `outputs`
          can be null. `outputs` can contain the same tensor more than once.
output_names - The names of the function's outputs. `output_names` array
               must either have the same length as `outputs`
               (i.e. `noutputs`) or be null. In the former case,
               the names should match the regular expression for ArgDef
               names - "[a-z][a-z0-9_]*". In the latter case,
               names for outputs will be generated automatically.
opts - various options for the function, e.g. XLA's inlining control.
description - optional human-readable description of this function.
status - Set to OK on success and an appropriate error on failure.
Note that when the same TF_Output is listed as both an input and an output,
the corresponding function's output will equal to this input,
instead of the original node's output.
Callers must also satisfy the following constraints:
- `inputs` cannot refer to TF_Outputs within a control flow context. For
- `inputs` and `outputs` cannot have reference types. Reference types are
  not exposed through C API and are being replaced with Resources. We support
  reference types inside function's body to support legacy code. Do not
- Every node in the function's body must have all of its inputs (including
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern const char* TF_FunctionName(TF_Function* func);
=head2 TF_FunctionToFunctionDef
=over 2
Write out a serialized representation of `func` (as a FunctionDef protocol
message) to `output_func_def` (allocated by TF_NewBuffer()).
`output_func_def`'s underlying buffer will be freed when TF_DeleteBuffer()
is called.
May fail on very large graphs in the future.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_FunctionToFunctionDef(TF_Function* func,
TF_Buffer* output_func_def,
TF_Status* status);
=head2 TF_FunctionImportFunctionDef
=over 2
Construct and return the function whose FunctionDef representation is
serialized in `proto`. `proto_len` must equal the number of bytes
pointed to by `proto`.
Returns:
On success, a newly created TF_Function instance. It must be deleted by
calling TF_DeleteFunction.
On failure, null.
=back
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
TF_CAPI_EXPORT extern TF_Function* TF_FunctionImportFunctionDef(
const void* proto, size_t proto_len, TF_Status* status);
=head2 TF_FunctionSetAttrValueProto
=over 2
Sets function attribute named `attr_name` to value stored in `proto`.
If this attribute is already set to another value, it is overridden.
`proto` should point to a sequence of bytes of length `proto_len`
representing a binary serialization of an AttrValue protocol
buffer.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_FunctionSetAttrValueProto(TF_Function* func,
const char* attr_name,
const void* proto,
size_t proto_len,
TF_Status* status);
=head2 TF_FunctionGetAttrValueProto
=over 2
Sets `output_attr_value` to the binary-serialized AttrValue proto
representation of the value of the `attr_name` attr of `func`.
If `attr_name` attribute is not present, status is set to an error.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_FunctionGetAttrValueProto(
TF_Function* func, const char* attr_name, TF_Buffer* output_attr_value,
TF_Status* status);
=head2 TF_DeleteFunction
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

=head2 TF_TryEvaluateConstant
=over 2
Attempts to evaluate `output`. This will only be possible if `output` doesn't
depend on any graph inputs (this function is safe to call if this isn't the
case though).
If the evaluation is successful, this function returns true and `output`s
value is returned in `result`. Otherwise returns false. An error status is
returned if something is wrong with the graph or input. Note that this may
return false even if no error status is set.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern unsigned char TF_TryEvaluateConstant(TF_Graph* graph,
TF_Output output,
TF_Tensor** result,
TF_Status* status);
=head2 TF_NewSession
=over 2
Return a new execution session with the associated graph, or NULL on
error. Does not take ownership of any input parameters.
*`graph` must be a valid graph (not deleted or nullptr). `graph` will be
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern TF_Session* TF_NewSession(TF_Graph* graph,
const TF_SessionOptions* opts,
TF_Status* status);
=head2 TF_LoadSessionFromSavedModel
=over 2
This function creates a new TF_Session (which is created on success) using
`session_options`, and then initializes state (restoring tensors and other
assets) using `run_options`.
Any NULL and non-NULL value combinations for (`run_options, `meta_graph_def`)
are valid.
- `export_dir` must be set to the path of the exported SavedModel.
- `tags` must include the set of tags used to identify one MetaGraphDef in
the SavedModel.
- `graph` must be a graph newly allocated with TF_NewGraph().
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_CloseSession(TF_Session*, TF_Status* status);
=head2 TF_DeleteSession
=over 2
Destroy a session object.
Even if error information is recorded in *status, this call discards all
local resources associated with the session. The session may not be used
during or after this call (and the session drops its reference to the
corresponding graph).
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_DeleteSession(TF_Session*, TF_Status* status);
=head2 TF_SessionRun
=over 2
Run the graph associated with the session starting with the supplied inputs
(inputs[0,ninputs-1] with corresponding values in input_values[0,ninputs-1]).
Any NULL and non-NULL value combinations for (`run_options`,
`run_metadata`) are valid.
- `run_options` may be NULL, in which case it will be ignored; or
non-NULL, in which case it must point to a `TF_Buffer` containing the
serialized representation of a `RunOptions` protocol buffer.
- `run_metadata` may be NULL, in which case it will be ignored; or
non-NULL, in which case it must point to an empty, freshly allocated
`TF_Buffer` that may be updated to contain the serialized representation
of a `RunMetadata` protocol buffer.
The caller retains ownership of `input_values` (which can be deleted using
TF_DeleteTensor). The caller also retains ownership of `run_options` and/or
`run_metadata` (when not NULL) and should manually call TF_DeleteBuffer on
them.
On success, the tensors corresponding to outputs[0,noutputs-1] are placed in
output_values[]. Ownership of the elements of output_values[] is transferred
to the caller, which must eventually call TF_DeleteTensor on them.
On failure, output_values[] contains NULLs.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_SessionRun(
TF_Session* session,
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern uint64_t TF_DeviceListIncarnation(
const TF_DeviceList* list, int index, TF_Status* status);
=head2 TF_LoadLibrary
=over 2
Load the library specified by library_filename and register the ops and
kernels present in that library.
Pass "library_filename" to a platform-specific mechanism for dynamically
loading a library. The rules for determining the exact location of the
library are platform-specific and are not documented here.
On success, place OK in status and return the newly created library handle.
The caller owns the library handle.
On failure, place an error status in status and return NULL.
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern TF_Library* TF_LoadLibrary(const char* library_filename,
TF_Status* status);
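For example, a hedged sketch (the plugin filename is hypothetical):

  TF_Status* status = TF_NewStatus();
  TF_Library* lib = TF_LoadLibrary("libmy_custom_ops.so", status);
  if (TF_GetCode(status) != TF_OK) {
      /* inspect TF_Message(status) */
  }
  TF_DeleteStatus(status);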
=head2 TF_GetOpList
=over 2
Get the OpList of OpDefs defined in the library pointed by lib_handle.
Returns a TF_Buffer. The memory pointed to by the result is owned by
lib_handle. The data in the buffer will be the serialized OpList proto for
ops defined in the library.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern TF_Buffer TF_GetOpList(TF_Library* lib_handle);
=head2 TF_DeleteLibraryHandle
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_DeleteLibraryHandle(TF_Library* lib_handle);
=head2 TF_GetAllOpList
=over 2
Get the OpList of all OpDefs defined in this address space.
Returns a TF_Buffer, ownership of which is transferred to the caller
(and can be freed using TF_DeleteBuffer).
The data in the buffer will be the serialized OpList proto for ops registered
in this address space.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern TF_Buffer* TF_GetAllOpList(void);
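For example:

  TF_Buffer* ops = TF_GetAllOpList();
  /* ops->data points to ops->length bytes of a serialized OpList proto */
  TF_DeleteBuffer(ops);  /* ownership was transferred to the caller */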
=head2 TF_NewApiDefMap
=over 2
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_DeleteApiDefMap(TF_ApiDefMap* apimap);
=head2 TF_ApiDefMapPut
=over 2
Add ApiDefs to the map.
`text` corresponds to a text representation of an ApiDefs protocol message.
The provided ApiDefs will be merged with existing ones in the map, with
precedence given to the newly added version in case of conflicts with
previous calls to TF_ApiDefMapPut.
=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_ApiDefMapPut(TF_ApiDefMap* api_def_map,
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

=back
/* From <tensorflow/c/c_api.h> */
TF_CAPI_EXPORT extern void TF_RegisterFilesystemPlugin(
const char* plugin_filename, TF_Status* status);
=head2 TF_NewShape
=over 2
Return a new, unknown rank shape object. The caller is responsible for
calling TF_DeleteShape to deallocate and destroy the returned shape.
=back
/* From <tensorflow/c/tf_shape.h> */
TF_CAPI_EXPORT extern TF_Shape* TF_NewShape();
=head2 TF_ShapeDims
=over 2
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod

=back
/* From <tensorflow/c/tf_tensor.h> */
TF_CAPI_EXPORT extern int64_t TF_TensorElementCount(const TF_Tensor* tensor);
=head2 TF_TensorBitcastFrom
=over 2
Copy the internal data representation of `from` to `to`. `new_dims` and
`num_new_dims` specify the new shape of the `to` tensor, `type` specifies its
data type. On success, *status is set to TF_OK and the two tensors share the
same data buffer.
This call requires that the `from` tensor and the given type and shape (dims
and num_dims) are "compatible" (i.e. they occupy the same number of bytes).
Specifically, given from_type_size = TF_DataTypeSize(TF_TensorType(from)):
ShapeElementCount(dims, num_dims) * TF_DataTypeSize(type)
must equal
TF_TensorElementCount(from) * from_type_size
where TF_ShapeElementCount would be the number of elements in a tensor with
the given shape.
In addition, this function requires:
* TF_DataTypeSize(TF_TensorType(from)) != 0
* TF_DataTypeSize(type) != 0
If any of the requirements are not met, *status is set to
TF_INVALID_ARGUMENT.
=back
/* From <tensorflow/c/tf_tensor.h> */
TF_CAPI_EXPORT extern void TF_TensorBitcastFrom(const TF_Tensor* from,
                                                 TF_DataType type,
                                                 TF_Tensor* to,
                                                 const int64_t* new_dims,
                                                 int num_new_dims,
                                                 TF_Status* status);
=head2 TF_StringDealloc
/* From <tensorflow/c/tf_tstring.h> */
TF_CAPI_EXPORT extern void TF_StringDealloc(TF_TString *tstr);
=head2 TF_DataTypeSize
=over 2
TF_DataTypeSize returns the sizeof() for the underlying type corresponding
to the given TF_DataType enum value. Returns 0 for variable length types
(eg. TF_STRING) or on failure.
=back
/* From <tensorflow/c/tf_datatype.h> */
TF_CAPI_EXPORT extern size_t TF_DataTypeSize(TF_DataType dt);
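A short sketch of how this is typically used to size host buffers; the helper name is hypothetical, and the zero return for variable-length types like C<TF_STRING> must be guarded:

  #include <stdint.h>
  #include <tensorflow/c/c_api.h>

  /* Bytes needed for n elements; returns 0 for variable-length dtypes. */
  size_t tensor_bytes(TF_DataType dt, int64_t n) {
    size_t elem = TF_DataTypeSize(dt);
    return elem == 0 ? 0 : elem * (size_t)n;
  }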
=head2 TF_NewOpDefinitionBuilder
=head2 TF_DeleteOpDefinitionBuilder
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern void TF_DeleteOpDefinitionBuilder(
TF_OpDefinitionBuilder* builder);
=head2 TF_OpDefinitionBuilderAddAttr
=over 2
Adds an attr to the given TF_OpDefinitionBuilder. The spec has
format "<name>:<type>" or "<name>:<type>=<default>"
where <name> matches regexp [a-zA-Z][a-zA-Z0-9_]*.
By convention, names containing only capital letters are reserved for
attributes whose values can be inferred by the operator implementation if not
supplied by the user. If the attribute name contains characters other than
capital letters, the operator expects the user to provide the attribute value
at operation runtime.
<type> can be:
"string", "int", "float", "bool", "type", "shape", or "tensor"
"numbertype", "realnumbertype", "quantizedtype"
(meaning "type" with a restriction on valid values)
"{int32,int64}" or {realnumbertype,quantizedtype,string}"
(meaning "type" with a restriction containing unions of value types)
"{\"foo\", \"bar\n baz\"}", or "{'foo', 'bar\n baz'}"
(meaning "string" with a restriction on valid values)
"list(string)", ..., "list(tensor)", "list(numbertype)", ...
(meaning lists of the above types)
"int >= 2" (meaning "int" with a restriction on valid values)
"list(string) >= 2", "list(int) >= 2"
(meaning "list(string)" / "list(int)" with length at least 2)
<default>, if included, should use the Proto text format
of <type>. For lists use [a, b, c] format.
Note that any attr specifying the length of an input or output will
get a default minimum of 1 unless the >= # syntax is used.
=back
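A brief illustrative sketch of attr specs in this format, using a hypothetical op name C<"MyOp">:

  TF_OpDefinitionBuilder* b = TF_NewOpDefinitionBuilder("MyOp");
  TF_OpDefinitionBuilderAddAttr(b, "T: {float, double}");  /* restricted type */
  TF_OpDefinitionBuilderAddAttr(b, "N: int >= 2");         /* constrained int */
  TF_OpDefinitionBuilderAddAttr(b, "fast: bool = true");   /* with a default */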
=head2 TF_OpDefinitionBuilderSetIsStateful
=over 2
Sets the is_stateful property of the builder to the given value.
The op built by this builder is stateful if its behavior depends on some
state beyond its input tensors (e.g. variable reading op) or if it has a
side-effect (e.g. printing or asserting ops). Equivalently, stateless ops
must always produce the same output for the same input and have no
side-effects.
By default Ops may be moved between devices. Stateful ops should either not
be moved, or should only be moved if that state can also be moved (e.g. via
some sort of save / restore). Stateful ops are guaranteed to never be
optimized away by Common Subexpression Elimination (CSE).
=back
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern void TF_OpDefinitionBuilderSetIsStateful(
TF_OpDefinitionBuilder* builder, bool is_stateful);
=head2 TF_OpDefinitionBuilderSetAllowsUninitializedInput
=over 2
=back
=head2 TF_ShapeInferenceContextNumInputs
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern int64_t TF_ShapeInferenceContextNumInputs(
TF_ShapeInferenceContext* ctx);
=head2 TF_NewShapeHandle
=over 2
Returns a newly allocated shape handle. The shapes represented by these
handles may be queried or mutated with the corresponding
TF_ShapeInferenceContext... functions.
=back
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern TF_ShapeHandle* TF_NewShapeHandle();
=head2 TF_ShapeInferenceContextGetInput
=over 2
=back
=head2 TF_ShapeInferenceContextScalar
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern TF_ShapeHandle* TF_ShapeInferenceContextScalar(
TF_ShapeInferenceContext* ctx);
=head2 TF_ShapeInferenceContextVectorFromSize
=over 2
Returns a newly-allocated shape handle representing a vector of the given
size. The returned handle should be freed with TF_DeleteShapeHandle.
=back
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern TF_ShapeHandle* TF_ShapeInferenceContextVectorFromSize(
TF_ShapeInferenceContext* ctx, size_t size);
=head2 TF_NewDimensionHandle
=head2 TF_ShapeInferenceContext_GetAttrType
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern void TF_ShapeInferenceContext_GetAttrType(
TF_ShapeInferenceContext* ctx, const char* attr_name, TF_DataType* val,
TF_Status* status);
=head2 TF_ShapeInferenceContextRank
=over 2
Returns the rank of the shape represented by the given handle.
=back
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern int64_t TF_ShapeInferenceContextRank(
TF_ShapeInferenceContext* ctx, TF_ShapeHandle* handle);
=head2 TF_ShapeInferenceContextRankKnown
=over 2
=back
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern int TF_ShapeInferenceContextRankKnown(
    TF_ShapeInferenceContext* ctx, TF_ShapeHandle* handle);
=head2 TF_ShapeInferenceContextWithRank
=over 2
If <handle> has rank <rank>, or its rank is unknown, return OK and return the
shape with asserted rank in <*result>. Otherwise an error is placed into
`status`.
=back
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern void TF_ShapeInferenceContextWithRank(
TF_ShapeInferenceContext* ctx, TF_ShapeHandle* handle, int64_t rank,
TF_ShapeHandle* result, TF_Status* status);
=head2 TF_ShapeInferenceContextWithRankAtLeast
=over 2
If <handle> has rank at least <rank>, or its rank is unknown, return OK and
return the shape with asserted rank in <*result>. Otherwise an error is
placed into `status`.
=back
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern void TF_ShapeInferenceContextWithRankAtLeast(
TF_ShapeInferenceContext* ctx, TF_ShapeHandle* handle, int64_t rank,
TF_ShapeHandle* result, TF_Status* status);
=head2 TF_ShapeInferenceContextWithRankAtMost
=over 2
If <handle> has rank at most <rank>, or its rank is unknown, return OK and
return the shape with asserted rank in <*result>. Otherwise an error is
placed into `status`.
=back
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern void TF_ShapeInferenceContextWithRankAtMost(
TF_ShapeInferenceContext* ctx, TF_ShapeHandle* handle, int64_t rank,
TF_ShapeHandle* result, TF_Status* status);
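A minimal sketch of how the C<WithRank> family is typically used inside a shape inference function; the function name and the rank-2 requirement are hypothetical:

  static void MyMatrixShapeFn(TF_ShapeInferenceContext* ctx, TF_Status* status) {
    TF_ShapeHandle* input    = TF_NewShapeHandle();
    TF_ShapeHandle* asserted = TF_NewShapeHandle();
    TF_ShapeInferenceContextGetInput(ctx, 0, input, status);
    if (TF_GetCode(status) == TF_OK) {
      /* Require input 0 to have rank 2 (or unknown rank). */
      TF_ShapeInferenceContextWithRank(ctx, input, 2, asserted, status);
    }
    TF_DeleteShapeHandle(asserted);
    TF_DeleteShapeHandle(input);
  }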
=head2 TF_ShapeInferenceContextDim
=over 2
Places a handle to the ith dimension of the given shape into *result.
=back
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern void TF_ShapeInferenceContextDim(
TF_ShapeInferenceContext* ctx, TF_ShapeHandle* shape_handle, int64_t i,
TF_DimensionHandle* result);
=head2 TF_ShapeInferenceContextSubshape
=over 2
Returns in <*result> a sub-shape of <shape_handle>, with dimensions
[start:end]. <start> and <end> can be negative, to index from the end of the
shape. <start> and <end> are set to the rank of <shape_handle> if > rank of
<shape_handle>.
=back
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern void TF_ShapeInferenceContextSubshape(
TF_ShapeInferenceContext* ctx, TF_ShapeHandle* shape_handle, int64_t start,
int64_t end, TF_ShapeHandle* result, TF_Status* status);
=head2 TF_ShapeInferenceContextSetUnknownShape
=over 2
Places an unknown shape in all outputs for the given inference context. Used
for shape inference functions with ops whose output shapes are unknown.
=back
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern void TF_ShapeInferenceContextSetUnknownShape(
TF_ShapeInferenceContext* ctx, TF_Status* status);
=head2 TF_DimensionHandleValueKnown
=over 2
Returns whether the given handle represents a known dimension.
=back
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern int TF_DimensionHandleValueKnown(
TF_DimensionHandle* dim_handle);
=head2 TF_DimensionHandleValue
=over 2
Returns the value of the given dimension.
=back
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern int64_t TF_DimensionHandleValue(
TF_DimensionHandle* dim_handle);
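A sketch combining the dimension-handle calls above; C<ctx> and C<shape_handle> are assumed to come from a surrounding shape inference function:

  TF_DimensionHandle* d = TF_NewDimensionHandle();
  TF_ShapeInferenceContextDim(ctx, shape_handle, 0, d);
  if (TF_DimensionHandleValueKnown(d)) {
    int64_t dim0 = TF_DimensionHandleValue(d);  /* known extent of dim 0 */
    /* ... use dim0 ... */
  }
  TF_DeleteDimensionHandle(d);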
=head2 TF_ShapeInferenceContextConcatenateShapes
=over 2
Returns in <*result> the result of appending the dimensions of <second> to
those of <first>.
=back
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern void TF_ShapeInferenceContextConcatenateShapes(
TF_ShapeInferenceContext* ctx, TF_ShapeHandle* first,
TF_ShapeHandle* second, TF_ShapeHandle* result, TF_Status* status);
=head2 TF_DeleteShapeHandle
=over 2
Frees the given shape handle.
=back
/* From <tensorflow/c/ops.h> */
TF_CAPI_EXPORT extern void TF_DeleteShapeHandle(TF_ShapeHandle* handle);
=head2 TF_DeleteRecursively
=over 2
Deletes the specified directory and all subdirectories and files underneath
it. This is accomplished by traversing the directory tree rooted at dirname
and deleting entries as they are encountered.
If dirname itself is not readable or does not exist, *undeleted_dir_count is
set to 1, *undeleted_file_count is set to 0 and an appropriate status (e.g.
TF_NOT_FOUND) is returned.
If dirname and all its descendants were successfully deleted, TF_OK is
returned and both error counters are set to zero.
Otherwise, while traversing the tree, undeleted_file_count and
undeleted_dir_count are updated if an entry of the corresponding type could
not be deleted. The returned error status represents the reason that any one
of these entries could not be deleted.
Typical status codes:
* TF_OK - dirname exists and we were able to delete everything underneath
* TF_NOT_FOUND - dirname doesn't exist
* TF_PERMISSION_DENIED - dirname or some descendant is not writable
* TF_UNIMPLEMENTED - some underlying functions (like Delete) are not
implemented
=back
=head2 TF_FileStat
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern void TF_FileStat(const char* filename,
TF_FileStatistics* stats,
TF_Status* status);
=head2 TF_NewWritableFile
=over 2
Creates or truncates the given filename and returns a handle to be used for
appending data to the file. If status is TF_OK, *handle is updated and the
caller is responsible for freeing it (see TF_CloseWritableFile).
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern void TF_NewWritableFile(const char* filename,
TF_WritableFileHandle** handle,
TF_Status* status);
=head2 TF_CloseWritableFile
=head2 TF_DeleteFile
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern void TF_DeleteFile(const char* filename,
TF_Status* status);
=head2 TF_StringStreamNext
=over 2
Retrieves the next item from the given TF_StringStream and places a pointer
to it in *result. If no more items are in the list, *result is set to NULL
and false is returned.
Ownership of the items retrieved with this function remains with the library.
Item pointers are invalidated after a call to TF_StringStreamDone.
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern bool TF_StringStreamNext(TF_StringStream* list,
const char** result);
=head2 TF_StringStreamDone
=over 2
Frees the resources associated with given string list. All pointers returned
by TF_StringStreamNext are invalid after this call.
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern void TF_StringStreamDone(TF_StringStream* list);
=head2 TF_GetChildren
=over 2
Retrieves the list of children of the given directory. You can iterate
through the list with TF_StringStreamNext. The caller is responsible for
freeing the list (see TF_StringStreamDone).
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern TF_StringStream* TF_GetChildren(const char* filename,
TF_Status* status);
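A sketch tying the stream functions together to list a directory; the wrapper function and path are illustrative:

  #include <stdio.h>
  #include <tensorflow/c/env.h>
  #include <tensorflow/c/tf_status.h>

  void list_children(const char* dir) {
    TF_Status* s = TF_NewStatus();
    TF_StringStream* children = TF_GetChildren(dir, s);
    if (TF_GetCode(s) == TF_OK) {
      const char* name = NULL;
      while (TF_StringStreamNext(children, &name)) {
        printf("%s\n", name);        /* items stay owned by the library */
      }
      TF_StringStreamDone(children); /* invalidates the item pointers */
    }
    TF_DeleteStatus(s);
  }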
=head2 TF_GetLocalTempDirectories
=over 2
Retrieves a list of directory names on the local machine that may be used for
temporary storage. You can iterate through the list with TF_StringStreamNext.
The caller is responsible for freeing the list (see TF_StringStreamDone).
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern TF_StringStream* TF_GetLocalTempDirectories(void);
=head2 TF_GetTempFileName
=over 2
Creates a temporary file name with an extension.
The caller is responsible for freeing the returned pointer.
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern char* TF_GetTempFileName(const char* extension);
=head2 TF_NowNanos
=over 2
=back
=head2 TF_StartThread
=over 2
Returns a new thread that is running work_func and is identified
(for debugging/performance-analysis) by thread_name.
The given param (which may be null) is passed to work_func when the thread
starts. In this way, data may be passed from the thread back to the caller.
Caller takes ownership of the result and must call TF_JoinThread on it
eventually.
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern TF_Thread* TF_StartThread(const TF_ThreadOptions* options,
const char* thread_name,
void (*work_func)(void*),
void* param);
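A sketch of the round trip, passing data back through C<param>; the names are illustrative, and the zero-initialized options request default stack and guard sizes:

  #include <tensorflow/c/env.h>

  static void work_func(void* param) {
    *(int*)param = 42;            /* result flows back via param */
  }

  void run_worker(void) {
    int result = 0;
    TF_ThreadOptions opts = {0};  /* defaults */
    TF_Thread* t = TF_StartThread(&opts, "worker", work_func, &result);
    TF_JoinThread(t);             /* joins and deletes the thread */
  }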
=head2 TF_NewKernelBuilder
=over 2
to the computation.
The TF_OpKernelContext pointer received by compute_func is owned by
TensorFlow and will be deleted once compute_func returns. It must not be used
after this.
Finally, when TensorFlow no longer needs the kernel, it will call delete_func
if one is provided. This function will receive the pointer returned in
`create_func` or nullptr if no `create_func` was provided.
The caller should pass the result of this function to
TF_RegisterKernelBuilder, which will take ownership of the pointer. If, for
some reason, the kernel builder will not be registered, the caller should
delete it with TF_DeleteKernelBuilder.
=back
/* From <tensorflow/c/kernels.h> */
TF_CAPI_EXPORT extern TF_KernelBuilder* TF_NewKernelBuilder(
const char* op_name, const char* device_name,
void* (*create_func)(TF_OpKernelConstruction*),
    void (*compute_func)(void*, TF_OpKernelContext*),
    void (*delete_func)(void*));
=head2 TF_KernelBuilder_TypeConstraint
/* From <tensorflow/c/kernels.h> */
TF_CAPI_EXPORT extern void TF_KernelBuilder_TypeConstraint(
TF_KernelBuilder* kernel_builder, const char* attr_name,
const TF_DataType type, TF_Status* status);
=head2 TF_KernelBuilder_HostMemory
=over 2
Specify that this kernel requires/provides an input/output arg
in host memory (instead of the default, device memory).
=back
/* From <tensorflow/c/kernels.h> */
TF_CAPI_EXPORT extern void TF_KernelBuilder_HostMemory(
TF_KernelBuilder* kernel_builder, const char* arg_name);
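Putting the builder functions together, a hypothetical CPU kernel for an op named C<"MyOp"> might be registered like this (a sketch, not a complete kernel):

  static void* MyOpCreate(TF_OpKernelConstruction* ctx) { return NULL; }
  static void MyOpCompute(void* kernel, TF_OpKernelContext* ctx) {
    /* read inputs with TF_GetInput, write outputs with TF_SetOutput */
  }
  static void MyOpDelete(void* kernel) {}

  void register_my_op_kernel(void) {
    TF_KernelBuilder* b = TF_NewKernelBuilder(
        "MyOp", "CPU", MyOpCreate, MyOpCompute, MyOpDelete);
    TF_Status* s = TF_NewStatus();
    TF_RegisterKernelBuilder("MyOpKernel", b, s); /* takes ownership of b */
    TF_DeleteStatus(s);
  }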
=head2 TF_KernelBuilder_Priority
=head2 TF_GetInput
/* From <tensorflow/c/kernels.h> */
TF_CAPI_EXPORT extern void TF_GetInput(TF_OpKernelContext* ctx, int i,
                                       TF_Tensor** tensor, TF_Status* status);
=head2 TF_InputRange
=over 2
Retrieves the start and stop indices, given the input name. Equivalent to
OpKernel::InputRange(). `args` will contain the result indices and status.
=back
/* From <tensorflow/c/kernels.h> */
TF_CAPI_EXPORT extern void TF_InputRange(TF_OpKernelContext* ctx,
const char* name,
TF_InputRange_Args* args);
=head2 TF_SetOutput
=head2 TF_GetOpKernelName
/* From <tensorflow/c/kernels.h> */
TF_CAPI_EXPORT extern TF_StringView TF_GetOpKernelName(TF_OpKernelContext* ctx);
=head2 TF_GetResourceMgrDefaultContainerName
=over 2
Returns the default container of the resource manager in OpKernelContext.
The returned TF_StringView's underlying string is owned by the OpKernel and
has the same lifetime as the OpKernel.
=back
/* From <tensorflow/c/kernels.h> */
TF_CAPI_EXPORT extern TF_StringView TF_GetResourceMgrDefaultContainerName(
TF_OpKernelContext* ctx);
    TF_OpKernelConstruction* ctx, const char* attr_name, TF_Bool* vals,
    int max_vals, TF_Status* status);
=head2 TF_OpKernelConstruction_GetAttrStringList
=over 2
Interprets the named kernel construction attribute as string array and fills
in `vals` and `lengths`, each of which must point to an array of length at
least `max_values`. *status is set to TF_OK. The elements of values will
point to addresses in `storage` which must be at least `storage_size` bytes
in length. Ideally, max_values would be set to list_size and `storage` would
be at least total_size, obtained from
TF_OpKernelConstruction_GetAttrSize(ctx, attr_name, list_size,
total_size).
=back
/* From <tensorflow/c/kernels.h> */
TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrStringList(
TF_OpKernelConstruction* ctx, const char* attr_name, char** vals,
    TF_OpKernelConstruction* ctx, const char* attr_name, TF_Tensor** vals,
    int max_values, TF_Status* status);
=head2 TF_OpKernelConstruction_GetAttrFunction
=over 2
Interprets the named kernel construction attribute as a
tensorflow::NameAttrList and returns the serialized proto as TF_Buffer.
`status` will be set. The caller takes ownership of the returned TF_Buffer
(if not null) and is responsible for managing its lifetime.
=back
/* From <tensorflow/c/kernels.h> */
TF_CAPI_EXPORT extern TF_Buffer* TF_OpKernelConstruction_GetAttrFunction(
TF_OpKernelConstruction* ctx, const char* attr_name, TF_Status* status);
=head2 TF_OpKernelConstruction_HasAttr
=over 2
    int num_dims, TF_AllocatorAttributes* alloc_attrs, TF_Status* status);
=head2 TF_AssignVariable
=over 2
Expose higher level Assignment operation for Pluggable vendors to implement
in the plugin for Training. The API takes in the context with indices for
the input and value tensors. It also accepts the copy callback provided by
pluggable vendor to do the copying of the tensors. The caller takes ownership
of the `source` and `dest` tensors and is responsible for freeing them with
TF_DeleteTensor. This function will return an error when the following
conditions are met:
1. `validate_shape` is set to `true`
2. The variable is initialized
3. The shape of the value tensor doesn't match the shape of the variable
tensor.
=back
/* From <tensorflow/c/kernels_experimental.h> */
TF_Status* status);
=head2 TF_AssignRefVariable
=over 2
Expose higher level Assignment operation for Pluggable vendors to implement
in the plugin for Training on ref variables. The API takes in the context
with indices for the input and value tensors. It also accepts the copy
callback provided by pluggable vendor to do the copying of the tensors. The
caller takes ownership of the `source` and `dest` tensors and is responsible
for freeing them with TF_DeleteTensor.
=back
/* From <tensorflow/c/kernels_experimental.h> */
TF_CAPI_EXPORT extern void TF_AssignRefVariable(
TF_OpKernelContext* ctx, int input_ref_index, int output_ref_index,
int value_index, bool use_locking, bool validate_shape,
void (*copyFunc)(TF_OpKernelContext* ctx, TF_Tensor* source,
TF_Tensor* dest),
    TF_Status* status);
=head2 TF_AssignUpdateVariable
=over 2
Expose higher level AssignUpdate operation for Pluggable vendors to implement
in the plugin for Training. The API takes in the context with indices for the
input and value tensors. It also accepts the copy callback provided by
pluggable vendor to do the copying of the tensors and the update callback to
apply the arithmetic operation. The caller takes ownership of the `source`,
`dest`, `tensor` and `value` tensors and is responsible for freeing them with
TF_DeleteTensor.
=back
/* From <tensorflow/c/kernels_experimental.h> */
TF_CAPI_EXPORT extern void TF_AssignUpdateVariable(
TF_OpKernelContext* ctx, int input_index, int value_index, int Op,
int isVariantType,
void (*copyFunc)(TF_OpKernelContext* ctx, TF_Tensor* source,
TF_Tensor* dest),
void (*updateFunc)(TF_OpKernelContext* ctx, TF_Tensor* tensor,
TF_Tensor* value, int Op),
TF_Status* status);
=head2 TF_MaybeLockVariableInputMutexesInOrder
=over 2
This is a helper function which acquires mutexes in order to provide a
thread-safe way of performing a weights update during the optimizer op. It
returns an opaque LockHolder handle back to the plugin. This handle is passed
to the Release API for releasing the locks when the weight update is done. The
caller takes ownership of the `source` and `dest` tensors and is responsible
for freeing them with TF_DeleteTensor.
=back
/* From <tensorflow/c/kernels_experimental.h> */
TF_CAPI_EXPORT extern void TF_MaybeLockVariableInputMutexesInOrder(
TF_OpKernelContext* ctx, bool do_lock, bool sparse, const int* const inputs,
size_t len,
void (*copyFunc)(TF_OpKernelContext* ctx, TF_Tensor* source,
TF_Tensor* dest),
TF_VariableInputLockHolder** lockHolder, TF_Status* status);
=head2 TF_GetInputTensorFromVariable
=over 2
This interface returns the `out` tensor, which is updated to correspond to the
variable passed with the input index. The caller takes ownership of the `source`
and `dest` tensors and is responsible for freeing them with TF_DeleteTensor.
=back
/* From <tensorflow/c/kernels_experimental.h> */
TF_CAPI_EXPORT extern void TF_GetInputTensorFromVariable(
TF_OpKernelContext* ctx, int input, bool lock_held, bool isVariantType,
bool sparse,
void (*copyFunc)(TF_OpKernelContext* ctx, TF_Tensor* source,
TF_Tensor* dest),
TF_Tensor** out, TF_Status* status);
=head2 TF_OpKernelContext_ForwardRefInputToRefOutput
=over 2
This interface forwards the reference from input to the output tensors
corresponding to the indices provided with `input_index` and `output_index`.
=back
/* From <tensorflow/c/kernels_experimental.h> */
TF_CAPI_EXPORT extern void TF_OpKernelContext_ForwardRefInputToRefOutput(
TF_OpKernelContext* ctx, int32_t input_index, int32_t output_index);
=head2 TF_ReleaseVariableInputLockHolder
=over 2
=back
TF_Status* status);
=head2 TF_AddNVariant
=over 2
Expose higher level AddN operation for Pluggable vendors to implement
in the plugin for Variant data types. The API takes in the context and a
callback provided by pluggable vendor to do a Binary Add operation on the
tensors unwrapped from the Variant tensors. The caller takes ownership of the
`a`, `b` and `out` tensors and is responsible for freeing them with
TF_DeleteTensor.
=back
/* From <tensorflow/c/kernels_experimental.h> */
TF_CAPI_EXPORT extern void TF_AddNVariant(
TF_OpKernelContext* ctx,
void (*binary_add_func)(TF_OpKernelContext* ctx, TF_Tensor* a, TF_Tensor* b,
TF_Tensor* out),
TF_Status* status);
=head2 TF_ZerosLikeVariant
=over 2
Expose higher level ZerosLike operation for Pluggable vendors to implement
in the plugin for Variant data types. The API takes in the context and a
callback provided by pluggable vendor to do a ZerosLike operation on the
tensors unwrapped from the Variant tensors. The caller takes ownership of the
`input` and `out` tensors and is responsible for freeing them with
TF_DeleteTensor.
=back
/* From <tensorflow/c/kernels_experimental.h> */
TF_CAPI_EXPORT extern void TF_ZerosLikeVariant(
TF_OpKernelContext* ctx,
void (*zeros_like_func)(TF_OpKernelContext* ctx, TF_Tensor* input,
TF_Tensor* out),
TF_Status* status);
=head2 TFE_ContextListDevices
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern TF_DeviceList* TFE_ContextListDevices(TFE_Context* ctx,
TF_Status* status);
=head2 TFE_ContextClearCaches
=over 2
Clears the internal caches in the TFE context. Useful when reseeding random
ops.
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern void TFE_ContextClearCaches(TFE_Context* ctx);
=head2 TFE_ContextSetThreadLocalDevicePlacementPolicy
=over 2
=back
=head2 TFE_TensorHandleDeviceName
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern const char* TFE_TensorHandleDeviceName(
TFE_TensorHandle* h, TF_Status* status);
=head2 TFE_TensorHandleBackingDeviceName
=over 2
Returns the name of the device in whose memory `h` resides.
This function will block till the operation that produces `h` has completed.
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern const char* TFE_TensorHandleBackingDeviceName(
TFE_TensorHandle* h, TF_Status* status);
=head2 TFE_TensorHandleCopySharingTensor
=over 2
Return a pointer to a new TFE_TensorHandle that shares the underlying tensor
with `h`. On success, `status` is set to OK. On failure, `status` reflects
the error and a nullptr is returned.
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern TFE_TensorHandle* TFE_TensorHandleCopySharingTensor(
TFE_TensorHandle* h, TF_Status* status);
=head2 TFE_TensorHandleResolve
TF_CAPI_EXPORT extern TF_Tensor* TFE_TensorHandleResolve(TFE_TensorHandle* h,
TF_Status* status);
=head2 TFE_TensorHandleCopyToDevice
=over 2
Create a new TFE_TensorHandle with the same contents as 'h' but placed
in the memory of the device name 'device_name'.
If source and destination are the same device, then this creates a new handle
that shares the underlying buffer. Otherwise, it currently requires at least
one of the source or destination devices to be CPU (i.e., for the source or
destination tensor to be placed in host memory).
If async execution is enabled, the copy may be enqueued and the call will
return a "non-ready" handle. Else, this function returns after the copy has
been done.
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern TFE_TensorHandle* TFE_TensorHandleCopyToDevice(
TFE_TensorHandle* h, TFE_Context* ctx, const char* device_name,
TF_Status* status);
=head2 TFE_TensorHandleTensorDebugInfo
=over 2
Retrieves TFE_TensorDebugInfo for `handle`.
If TFE_TensorHandleTensorDebugInfo succeeds, `status` is set to OK and caller
is responsible for deleting returned TFE_TensorDebugInfo.
If TFE_TensorHandleTensorDebugInfo fails, `status` is set to appropriate
error and nullptr is returned. This function can block till the operation
that produces `handle` has completed.
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern TFE_TensorDebugInfo* TFE_TensorHandleTensorDebugInfo(
TFE_TensorHandle* h, TF_Status* status);
=head2 TFE_DeleteTensorDebugInfo
=over 2
Deletes `debug_info`.
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern void TFE_DeleteTensorDebugInfo(
TFE_TensorDebugInfo* debug_info);
=head2 TFE_TensorDebugInfoOnDeviceNumDims
=over 2
Returns the number of dimensions used to represent the tensor on its device.
The number of dimensions used to represent the tensor on device can be
different from the number returned by TFE_TensorHandleNumDims.
The return value was current at the time of TFE_TensorDebugInfo creation.
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern int TFE_TensorDebugInfoOnDeviceNumDims(
TFE_TensorDebugInfo* debug_info);
=head2 TFE_TensorDebugInfoOnDeviceDim
=over 2
Returns the number of elements in dimension `dim_index`.
Tensor representation on device can be transposed from its representation
on host. The data contained in dimension `dim_index` on device
can correspond to the data contained in another dimension in the on-host
representation. The dimensions are indexed using the standard TensorFlow
major-to-minor order (slowest varying dimension first),
not XLA's minor-to-major order.
On-device dimensions can be padded. TFE_TensorDebugInfoOnDeviceDim returns
the number of elements in a dimension after padding.
The return value was current at the time of TFE_TensorDebugInfo creation.
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern int64_t TFE_TensorDebugInfoOnDeviceDim(
    TFE_TensorDebugInfo* debug_info, int dim_index);
=head2 TFE_OpSetAttrType
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern void TFE_OpSetAttrType(TFE_Op* op, const char* attr_name,
TF_DataType value);
=head2 TFE_OpSetAttrShape
=over 2
If the number of dimensions is unknown, `num_dims` must be set to
-1 and `dims` can be null. If a dimension is unknown, the
corresponding entry in the `dims` array must be -1.
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern void TFE_OpSetAttrShape(TFE_Op* op, const char* attr_name,
const int64_t* dims,
const int num_dims,
TF_Status* out_status);
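For instance, a batch-agnostic 4-D shape could be attached like this (a sketch; C<op> and C<status> are assumed to exist, and the attr name is hypothetical):

  int64_t dims[] = {-1, 28, 28, 1};  /* -1 marks the unknown batch dim */
  TFE_OpSetAttrShape(op, "shape", dims, 4, status);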
=head2 TFE_OpSetAttrFunction
=head2 TFE_ContextExportRunMetadata
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern void TFE_ContextExportRunMetadata(TFE_Context* ctx,
TF_Buffer* buf,
TF_Status* status);
=head2 TFE_ContextStartStep
=over 2
Some TF ops need a step container to be set to limit the lifetime of some
resources (mostly TensorArray and Stack, used in while loop gradients in
graph mode). Calling this on a context tells it to start a step.
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern void TFE_ContextStartStep(TFE_Context* ctx);
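A sketch of bracketing a step, assuming a live C<TFE_Context* ctx>:

  TFE_ContextStartStep(ctx);
  /* ... run ops whose step-scoped resources should be reclaimed ... */
  TFE_ContextEndStep(ctx);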
=head2 TFE_ContextEndStep
=over 2
=back
=head2 TFE_CallDLManagedTensorDeleter
/* From <tensorflow/c/eager/dlpack.h> */
TF_CAPI_EXPORT extern void TFE_CallDLManagedTensorDeleter(void* dlm_ptr);
=head2 TFE_OpReset
=over 2
Resets `op_to_reset` with `op_or_function_name` and `raw_device_name`. This
is for performance optimization by reusing an existing unused op rather than
creating a new op every time. If `raw_device_name` is `NULL` or empty, it
does not set the device name. If it's not `NULL`, then it attempts to parse
and set the device name. It's effectively `TFE_OpSetDevice`, but it is faster
than separately calling it because if the existing op has the same
`raw_device_name`, it skips parsing and just leaves it as it is.
=back
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TFE_OpReset(TFE_Op* op_to_reset,
const char* op_or_function_name,
const char* raw_device_name,
TF_Status* status);
=head2 TFE_ContextEnableGraphCollection
=over 2
Enables only graph collection in RunMetadata on the functions executed from
this context.
=back
=head2 TFE_ContextAsyncWait
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TFE_ContextAsyncWait(TFE_Context* ctx,
TF_Status* status);
=head2 TFE_TensorHandleDevicePointer
=over 2
This function will block till the operation that produces `h` has
completed. This is only valid on local TFE_TensorHandles. The pointer
returned will be on the device in which the TFE_TensorHandle resides (so e.g.
for a GPU tensor this will return a pointer to GPU memory). The pointer is
only guaranteed to be valid until TFE_DeleteTensorHandle is called on this
TensorHandle. Only supports POD data types.
=back
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern void* TFE_TensorHandleDevicePointer(TFE_TensorHandle*,
TF_Status*);
=head2 TFE_TensorHandleDeviceMemorySize
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern size_t TFE_TensorHandleDeviceMemorySize(TFE_TensorHandle*,
TF_Status*);
=head2 TFE_NewTensorHandleFromDeviceMemory
=over 2
Creates a new TensorHandle from memory residing in the physical device
device_name. Takes ownership of the memory, and will call deleter to release
it after TF no longer needs it or in case of error.
Custom devices must use TFE_NewCustomDeviceTensorHandle instead.
=back
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern TFE_TensorHandle* TFE_NewTensorHandleFromDeviceMemory(
TFE_Context* ctx, const char* device_name, TF_DataType, const int64_t* dims,
int num_dims, void* data, size_t len,
void (*deallocator)(void* data, size_t len, void* arg),
void* deallocator_arg, TF_Status* status);
=head2 TFE_HostAddressSpace
=over 2
Retrieves the address space (i.e. job, replica, task) of the local host and
saves it in the buffer.
=back
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TFE_HostAddressSpace(TFE_Context* ctx,
TF_Buffer* buf);
=head2 TFE_OpGetAttrs
=over 2
Fetch a reference to `op`'s attributes. The returned reference is only valid
while `op` is alive.
=back
=head2 TFE_IsCustomDevice
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern bool TFE_IsCustomDevice(TFE_Context* ctx,
const char* device_name);
=head2 TFE_NewCustomDeviceTensorHandle
=over 2
Creates a new TensorHandle from memory residing in a custom device. Takes
ownership of the memory pointed to by `tensor_handle_data`, and calls
`methods.deallocator` to release it after TF no longer needs it or in case of
an error.
This call is similar to `TFE_NewTensorHandleFromDeviceMemory`, but supports
custom devices instead of physical devices and does not require blocking
waiting for exact shapes.
=back
const char* key,
const char* value,
TF_Status* status);
=head2 TFE_GetConfigKeyValue
=over 2
Get configuration key and value using coordination service.
The config key must be set before getting its value. Getting the value of a
non-existing config key will result in an error.
=back
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TFE_GetConfigKeyValue(TFE_Context* ctx,
const char* key,
TF_Buffer* value_buf,
TF_Status* status);
=head2 TFE_DeleteConfigKeyValue
=head2 TFE_WaitAtBarrier
=over 2
=back
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TFE_WaitAtBarrier(TFE_Context* ctx,
const char* barrier_id,
int64_t barrier_timeout_in_ms,
TF_Status* status);
=head2 TF_GetNodesToPreserveListSize
=over 2
Get a set of node names that must be preserved. They can not be transformed
or removed during the graph transformation. This includes feed and fetch
nodes, keep_ops, init_ops. Fills in `num_values` and `storage_size`, they
will be used in `TF_GetNodesToPreserveList`.
=back
/* From <tensorflow/c/experimental/grappler/grappler.h> */
TF_CAPI_EXPORT extern void TF_GetNodesToPreserveListSize(
const TF_GrapplerItem* item, int* num_values, size_t* storage_size,
TF_Status* status);
=head2 TF_GetNodesToPreserveList
=over 2
Get a set of node names that must be preserved. They can not be transformed
or removed during the graph transformation. This includes feed and fetch
nodes, keep_ops, init_ops. Fills in `values` and `lengths`, each of which
must point to an array of length at least `num_values`.
The elements of values will point to addresses in `storage` which must be at
least `storage_size` bytes in length. `num_values` and `storage` can be
obtained from TF_GetNodesToPreserveListSize.
Fails if storage_size is too small to hold the requested number of strings.
=back
/* From <tensorflow/c/experimental/grappler/grappler.h> */
TF_CAPI_EXPORT extern void TF_GetNodesToPreserveList(
const TF_GrapplerItem* item, char** values, size_t* lengths, int num_values,
void* storage, size_t storage_size, TF_Status* status);
=head2 TF_GetFetchNodesListSize
=over 2
Get a set of node names for fetch nodes. Fills in `num_values` and
`storage_size`, they will be used in `TF_GetFetchNodesList`.
=back
size_t* storage_size,
TF_Status* status);
=head2 TF_GetFetchNodesList
=over 2
Get a set of node names for fetch nodes. Fills in `values` and `lengths`,
each of which must point to an array of length at least `num_values`.
The elements of values will point to addresses in `storage` which must be at
least `storage_size` bytes in length. `num_values` and `storage` can be
obtained from TF_GetFetchNodesListSize.
Fails if storage_size is too small to hold the requested number of strings.
=back
/* From <tensorflow/c/experimental/grappler/grappler.h> */
TF_CAPI_EXPORT extern void TF_GetFetchNodesList(const TF_GrapplerItem* item,
char** values, size_t* lengths,
                                                 int num_values, void* storage,
                                                 size_t storage_size,
                                                 TF_Status* status);
TF_GraphProperties* graph_properties);
=head2 TF_InferStatically
=over 2
Infer tensor shapes through abstract interpretation.
If assume_valid_feeds is true, it can help infer shapes in the fanout of fed
nodes. This may cause incorrectness in graph analyses, but is useful for
simulation or scheduling.
If aggressive_shape_inference is true, nodes are executed on the host to
identify output values when possible, and other aggressive strategies are
applied. This may cause incorrectness in graph analyses, but is useful for
simulation or scheduling.
If include_input_tensor_values is true, the values of constant
tensors will be included in the input properties.
If include_output_tensor_values is true, the values of constant tensors will
be included in the output properties.
=back
/* From <tensorflow/c/experimental/grappler/grappler.h> */
TF_CAPI_EXPORT extern void TF_InferStatically(
TF_GraphProperties* graph_properties, TF_Bool assume_valid_feeds,
TF_Bool aggressive_shape_inference, TF_Bool include_input_tensor_values,
TF_Bool include_output_tensor_values, TF_Status* s);
=head2 TF_GetInputPropertiesListSize
=over 2
Get the size of input OpInfo::TensorProperties given node name.
=back
=head2 TF_DeleteFunctionLibraryDefinition
/* From <tensorflow/c/experimental/grappler/grappler.h> */
TF_CAPI_EXPORT extern void TF_DeleteFunctionLibraryDefinition(
TF_FunctionLibraryDefinition* fn_lib);
=head2 TF_LookUpOpDef
=over 2
Shorthand for calling LookUp to get the OpDef from FunctionLibraryDefinition
given op name. The returned OpDef is represented by TF_Buffer.
=back
/* From <tensorflow/c/experimental/grappler/grappler.h> */
TF_CAPI_EXPORT extern void TF_LookUpOpDef(TF_FunctionLibraryDefinition* fn_lib,
const char* name, TF_Buffer* buf,
TF_Status* s);
=head2 TF_TensorSpecDataType
=head2 TF_LoadSavedModelWithTags
/* From <tensorflow/c/experimental/saved_model/public/saved_model_api.h> */
TF_CAPI_EXPORT extern TF_SavedModel* TF_LoadSavedModelWithTags(
    const char* dirname, TFE_Context* ctx, const char* const* tags,
    int tags_len, TF_Status* status);
=head2 TF_DeleteSavedModel
=over 2
Deletes a TF_SavedModel, and frees any resources owned by it.
=back
/* From <tensorflow/c/experimental/saved_model/public/saved_model_api.h> */
TF_CAPI_EXPORT extern void TF_DeleteSavedModel(TF_SavedModel* model);
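A load/teardown sketch, assuming a C<TFE_Context* ctx> and a model exported under a hypothetical path with the conventional C<"serve"> tag:

  const char* tags[] = {"serve"};
  TF_Status* s = TF_NewStatus();
  TF_SavedModel* model =
      TF_LoadSavedModelWithTags("/path/to/model", ctx, tags, 1, s);
  if (TF_GetCode(s) == TF_OK) {
    /* ... fetch and call functions from the model ... */
    TF_DeleteSavedModel(model);
  }
  TF_DeleteStatus(s);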
=head2 TF_GetSavedModelConcreteFunction
=over 2
=back
=head2 TF_ConcreteFunctionGetMetadata
TF_CAPI_EXPORT extern TF_FunctionMetadata* TF_ConcreteFunctionGetMetadata(
TF_ConcreteFunction* func);
=head2 TF_ConcreteFunctionMakeCallOp
=over 2
Returns a TFE_Op suitable for executing this function. Caller must provide
all function inputs in `inputs`, and must not add any additional inputs on
the returned op. (i.e. don't call TFE_OpAddInput or TFE_OpAddInputList).
The caller is responsible for deleting the returned TFE_Op. If op
construction fails, `status` will be non-OK and the returned pointer will be
null.
TODO(bmzhao): Remove this function in a subsequent change; Design + implement
a Function Execution interface for ConcreteFunction that accepts a tagged
union of types (tensorflow::Value). This effectively requires moving much of
the implementation of function.py/def_function.py to C++, and exposing a
high-level API here. A strawman for what this interface could look like:
TF_Value* TF_ExecuteFunction(TFE_Context*, TF_ConcreteFunction*, TF_Value*
inputs, int num_inputs, TF_Status* status);
=back
/* From <tensorflow/c/experimental/saved_model/public/concrete_function.h> */
TF_CAPI_EXPORT extern TFE_Op* TF_ConcreteFunctionMakeCallOp(
TF_ConcreteFunction* func, TFE_TensorHandle** inputs, int num_inputs,
TF_Status* status);
=head2 TF_SignatureDefParamName
=over 2
Returns the name of the given parameter. The caller is not responsible for
freeing the returned char*.
=back
/* From <tensorflow/c/experimental/saved_model/public/signature_def_param.h> */
TF_CAPI_EXPORT extern const char* TF_SignatureDefParamName(
const TF_SignatureDefParam* param);
=head2 TF_SignatureDefParamTensorSpec
=head2 TF_SignatureDefFunctionGetMetadata
TF_CAPI_EXPORT extern TF_SignatureDefFunctionMetadata*
TF_SignatureDefFunctionGetMetadata(TF_SignatureDefFunction* func);
=head2 TF_SignatureDefFunctionMakeCallOp
=over 2
Returns a TFE_Op suitable for executing this function. Caller must provide
all function inputs in `inputs`, and must not add any additional inputs on
the returned op. (i.e. don't call TFE_OpAddInput or TFE_OpAddInputList).
The caller is responsible for deleting the returned TFE_Op. If op
construction fails, `status` will be non-OK and the returned pointer will be
null.
=back
/* From <tensorflow/c/experimental/saved_model/public/signature_def_function.h> */
TF_CAPI_EXPORT extern TFE_Op* TF_SignatureDefFunctionMakeCallOp(
TF_SignatureDefFunction* func, TFE_TensorHandle** inputs, int num_inputs,
TF_Status* status);
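A call sketch, assuming C<func>, C<inputs>, C<num_inputs>, and a status C<s> are already prepared; the retval capacity of 4 is illustrative, and the returned op must be deleted by the caller:

  TFE_Op* call = TF_SignatureDefFunctionMakeCallOp(func, inputs, num_inputs, s);
  if (TF_GetCode(s) == TF_OK) {
    TFE_TensorHandle* retvals[4];
    int num_retvals = 4;   /* in: capacity, out: actual count */
    TFE_Execute(call, retvals, &num_retvals, s);
    TFE_DeleteOp(call);
  }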
=head2 TF_SignatureDefParamListGet
/* From <tensorflow/c/experimental/saved_model/public/signature_def_param_list.h> */
TF_CAPI_EXPORT extern const TF_SignatureDefParam* TF_SignatureDefParamListGet(
    const TF_SignatureDefParamList* list, int i);
=head2 TF_SignatureDefFunctionMetadataArgs
=over 2
Retrieves the arguments of the SignatureDefFunction. The caller is not
responsible for freeing the returned pointer.
=back
/* From <tensorflow/c/experimental/saved_model/public/signature_def_function_metadata.h> */
TF_CAPI_EXPORT extern const TF_SignatureDefParamList*
TF_SignatureDefFunctionMetadataArgs(
const TF_SignatureDefFunctionMetadata* list);
=head2 TF_SignatureDefFunctionMetadataReturns
=over 2
Retrieves the returns of the SignatureDefFunction. The caller is not
responsible for freeing the returned pointer.
=back
/* From <tensorflow/c/experimental/saved_model/public/signature_def_function_metadata.h> */
TF_CAPI_EXPORT extern const TF_SignatureDefParamList*
TF_SignatureDefFunctionMetadataReturns(
const TF_SignatureDefFunctionMetadata* list);
=head2 TF_EnableXLACompilation
=head2 TF_FunctionDebugString
TF_CAPI_EXPORT extern char* TF_FunctionDebugString(TF_Function* func,
size_t* len);
=head2 TF_DequeueNamedTensor
=over 2
Caller must call TF_DeleteTensor() over the returned tensor. If the queue is
empty, this call is blocked.
Tensors are enqueued via the corresponding TF enqueue op.
TODO(hongm): Add support for `timeout_ms`.
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern TF_Tensor* TF_DequeueNamedTensor(TF_Session* session,
int tensor_id,
TF_Status* status);
=head2 TF_EnqueueNamedTensor
=over 2
On success, enqueues `tensor` into a TF-managed FifoQueue given by
`tensor_id`, associated with `session`. There must be a graph node named
"fifo_queue_enqueue_<tensor_id>", to be executed by this API call. It reads
from a placeholder node "arg_tensor_enqueue_<tensor_id>".
`tensor` is still owned by the caller. This call will be blocked if the queue
has reached its capacity, and will be unblocked when the queued tensors again
drop below the capacity due to dequeuing.
Tensors are dequeued via the corresponding TF dequeue op.
TODO(hongm): Add support for `timeout_ms`.
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TF_EnqueueNamedTensor(TF_Session* session,
int tensor_id,
TF_Tensor* tensor,
TF_Status* status);
=head2 TF_AttrBuilderCheckCanRunOnDevice
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TF_AttrBuilderCheckCanRunOnDevice(
TF_AttrBuilder* builder, const char* device_type, TF_Status* status);
=head2 TF_GetNumberAttrForOpListInput
=over 2
For argument number input_index, fetch the corresponding number_attr that
needs to be updated with the argument length of the input list.
Returns nullptr if there is any problem like op_name is not found, or the
argument does not support this attribute type.
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern const char* TF_GetNumberAttrForOpListInput(
const char* op_name, int input_index, TF_Status* status);
=head2 TFE_EnableCollectiveOps
TF_CAPI_EXPORT extern void TFE_EnableCollectiveOps(TFE_Context* ctx,
const void* proto,
size_t proto_len,
TF_Status* status);
=head2 TFE_AbortCollectiveOps
=over 2
Aborts all ongoing collectives with the specified status. After abortion,
subsequent collectives will error with this status immediately. To reset the
collectives, create a new EagerContext.
This is intended to be used when a peer failure is detected.
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TFE_AbortCollectiveOps(TFE_Context* ctx,
TF_Status* status);
=head2 TFE_CollectiveOpsCheckPeerHealth
=over 2
Checks the health of collective ops peers. Explicit health check is needed in
multi worker collective ops to detect failures in the cluster. If a peer is
down, collective ops may hang.
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TFE_CollectiveOpsCheckPeerHealth(
TFE_Context* ctx, const char* task, int64_t timeout_in_ms,
TF_Status* status);
=head2 TF_NewShapeAndTypeList
=head2 TFE_InferShapes
=over 2
Infer shapes for the given `op`. The arguments mimic the arguments of the
`shape_inference::InferenceContext` constructor. Note the following:
- The inputs of the `op` are not used for shape inference. So, it is OK to
not have the inputs properly set in `op`. See `input_tensors` if you want
shape inference to consider the input tensors of the op for shape inference.
- The types need not be set in `input_shapes` as it is not used.
- The number of `input_tensors` should be the same as the number of items
in `input_shapes`.
The results are returned in `output_shapes` and
`output_resource_shapes_and_types`. The caller is responsible for freeing the
memory in these buffers by calling `TF_DeleteShapeAndTypeList`.
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TFE_InferShapes(
TFE_Op* op, TF_ShapeAndTypeList* input_shapes, TF_Tensor** input_tensors,
TF_ShapeAndTypeList* input_tensor_as_shapes,
TF_ShapeAndTypeList** input_resource_shapes_and_types,
TF_ShapeAndTypeList** output_shapes,
TF_ShapeAndTypeList*** output_resource_shapes_and_types, TF_Status* status);
=head2 TF_ImportGraphDefOptionsSetValidateColocationConstraints
=over 2
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern void
TF_ImportGraphDefOptionsSetValidateColocationConstraints(
TF_ImportGraphDefOptions* opts, unsigned char enable);
=head2 TF_LoadPluggableDeviceLibrary
=over 2
Load the library specified by library_filename and register the pluggable
device and related kernels present in that library. This function is not
supported on mobile and embedded platforms and will fail if called.
Pass "library_filename" to a platform-specific mechanism for dynamically
loading a library. The rules for determining the exact location of the
library are platform-specific and are not documented here.
On success, returns the newly created library handle and places OK in status.
The caller owns the library handle.
An alternative to installing all the software listed on the "bare metal" host
is to use the NVIDIA Container Toolkit. See
L<AI::TensorFlow::Libtensorflow::Manual::Quickstart/DOCKER IMAGES> for more
information.
=head1 RUNTIME
When running C<libtensorflow>, your program will attempt to acquire quite a bit
of GPU VRAM. You can check if you have enough free VRAM by using the
C<nvidia-smi> command which displays resource information as well as which
processes are currently using the GPU. If C<libtensorflow> is not able to
allocate enough memory, it will crash with an out-of-memory (OOM) error. This
is typical when running multiple programs that both use the GPU.
If you have multiple GPUs, you can control which GPUs your program can access
by using the
L<C<CUDA_VISIBLE_DEVICES> environment variable|https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars>
provided by the underlying CUDA library. This is typically
done by setting the variable in a C<BEGIN> block before loading
L<AI::TensorFlow::Libtensorflow>:
        image_size => [ 512, 512 ],
    },
  );

  my $model_name = 'centernet_hourglass_512x512';

  say "Selected model: $model_name : $model_name_to_params{$model_name}{handle}";

  my $model_uri = URI->new( $model_name_to_params{$model_name}{handle} );
  $model_uri->query_form( 'tf-hub-format' => 'compressed' );
  my $model_base = substr( $model_uri->path, 1 ) =~ s,/,_,gr;
  my $model_archive_path = "${model_base}.tar.gz";

  my $http = HTTP::Tiny->new;
  for my $download ( [ $model_uri => $model_archive_path ], ) {
      my ( $uri, $path ) = @$download;
      say "Downloading $uri to $path";
      next if -e $path;
      $http->mirror( $uri, $path );
  }

  use Archive::Extract;
  my $ae = Archive::Extract->new( archive => $model_archive_path );
  die "Could not extract archive"
      unless $ae->extract( to => $model_base );

  my $saved_model = path($model_base)->child('saved_model.pb');
  say "Saved model is in $saved_model" if -f $saved_model;

  # Get the labels
  my $response = $http->get('https://raw.githubusercontent.com/tensorflow/models/a4944a57ad2811e1f6a7a87589a9fc8a776e8d3c/object_detection/data/mscoco_label_map.pbtxt');

  my %labels_map = $response->{content} =~ m<
      (?:item \s+ \{ \s+
          \Qname:\E \s+ "[^"]+" \s+
          \Qid:\E \s+ (\d+) \s+
          \Qdisplay_name:\E \s+ "([^"]+)" \s+
      })+
  >sgx;

  my $label_count = List::Util::max keys %labels_map;

  say "We have a label count of $label_count. These labels include: ",
          op   => $graph->OperationByName('serving_default_input_tensor'),
          dict => { input_tensor => 0, }
      },
      out => {
          op   => $graph->OperationByName('StatefulPartitionedCall'),
          dict => {
              detection_boxes   => 0,
              detection_classes => 1,
              detection_scores  => 2,
              num_detections    => 3,
          }
      },
  );

  my %outputs;
  %outputs = map {
      my $put_type = $_;
      my $op = $ops{$put_type}{op};
          });
      }
      keys %$port_dict
      }
  } keys %ops;

  p %outputs;

  use HTML::Tiny;

  my %images_for_test_to_uri = (
      "beach_scene" => 'https://github.com/tensorflow/models/blob/master/research/object_detection/test_images/image2.jpg?raw=true',
  );

  my @image_names = sort keys %images_for_test_to_uri;

  my $h = HTML::Tiny->new;

  my $image_name = 'beach_scene';
  if ( IN_IPERL ) {
      IPerl->html(
          $h->a( { href => $images_for_test_to_uri{$image_name} },
              $h->img({
                  # ...
                  width => '100%',
              })
          ),
      );
  }

  sub load_image_to_pdl {
      my ( $uri, $image_size ) = @_;

      my $http = HTTP::Tiny->new;
      my $response = $http->get( $uri );
      die "Could not fetch image from $uri" unless $response->{success};
      say "Downloaded $uri";

      my $img = Imager->new;
      $img->read( data => $response->{content} );

      # Create PDL ndarray from Imager data in-memory.
      my $data;
      $img->write( data => \$data, type => 'raw' )
          or die "could not write " . $img->errstr;

      die "Image does not have 3 channels, it has @{[ $img->getchannels ]} channels"
          if $img->getchannels != 3;

      # $data is packed as PDL->dims == [w,h] with RGB pixels
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod

  undef;

  my $tftensor_output_by_name = $RunSession->( $session, $t );

  my %pdl_output_by_name = map {
      $_ => FloatTFTensorToPDL( $tftensor_output_by_name->{$_} )
  } keys $tftensor_output_by_name->%*;

  undef;

  my $min_score_thresh = 0.30;

  my $which_detect = which( $pdl_output_by_name{detection_scores} > $min_score_thresh );

  my %subset;
  $subset{detection_boxes}   = $pdl_output_by_name{detection_boxes}->dice('X', $which_detect);
  $subset{detection_classes} = $pdl_output_by_name{detection_classes}->dice($which_detect);
  $subset{detection_scores}  = $pdl_output_by_name{detection_scores}->dice($which_detect);
  $subset{detection_class_labels}->@* = map { $labels_map{$_} } $subset{detection_classes}->list;

  p %subset;

  my $plot_output_path = 'objects-detected.png';
  my $gp = gpwin('pngcairo', font => ",12", output => $plot_output_path, aa => 2, size => [10] );

  my @qual_cmap = ('#a6cee3','#1f78b4','#b2df8a','#33a02c','#fb9a99','#e31a1c','#fdbf6f','#ff7f00','#cab2d6');

  $gp->options(
      map {
          my $idx = $_;
          my $lc_rgb = $qual_cmap[ $subset{detection_classes}->slice("($idx)")->squeeze % @qual_cmap ];

          my $box_corners_yx_norm = $subset{detection_boxes}->slice([],$idx,[0,0,0]);
          $box_corners_yx_norm->reshape(2,2);

          my $box_corners_yx_img = $box_corners_yx_norm * $pdl_images[0]->shape->slice('-1:-2');

          my $from_xy  = join ",", $box_corners_yx_img->slice('-1:0,(0)')->list;
          my $to_xy    = join ",", $box_corners_yx_img->slice('-1:0,(1)')->list;
          my $label_xy = join ",", $box_corners_yx_img->at(1,1), $box_corners_yx_img->at(0,1);

          (
              [ object => [ "rect" =>
                  from => $from_xy, to => $to_xy,
                  qq{front fs empty border lc rgb "$lc_rgb" lw 5} ], ],
              [ label => [
                  sprintf("%s: %.1f",
                      $subset{detection_class_labels}[$idx],
                      100*$subset{detection_scores}->at($idx,0) ) =>
                  at => $label_xy, 'left',
                  offset => 'character 0,-0.25',
                  qq{font ",12" boxed front tc rgb "#ffffff"} ], ],
          )
      } 0..$subset{detection_boxes}->dim(1)-1
  );

  $gp->plot(
      topcmds => q{set style textbox opaque fc "#505050f0" noborder},
      square => 1,
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod

=pod
=encoding UTF-8
=head1 NAME
AI::TensorFlow::Libtensorflow::Manual::Notebook::InferenceUsingTFHubCenterNetObjDetect - Using TensorFlow to do object detection using a pre-trained model
=head1 SYNOPSIS
The following tutorial is based on the L<TensorFlow Hub Object Detection Colab notebook|https://www.tensorflow.org/hub/tutorials/tf2_object_detection>. It uses a pre-trained model based on the I<CenterNet> architecture trained on the I<COCO 2017> dat...
Some of this code is identical to that of the C<InferenceUsingTFHubMobileNetV2Model> notebook. Please look there for an explanation of that code. As stated there, this will later be wrapped up into a high-level library to hide the details behind an API.
=head1 COLOPHON
The following document is either a POD file which can additionally be run as a Perl script or a Jupyter Notebook which can be run in L<IPerl|https://p3rl.org/Devel::IPerl> (viewable online at L<nbviewer|https://nbviewer.org/github/EntropyOrg/perl-AI-...
=over
=item *
C<PDL::Graphics::Gnuplot> requires C<gnuplot>.
=back
If you are running the code, you may optionally install the L<C<tensorflow> Python package|https://www.tensorflow.org/install/pip> in order to access the C<saved_model_cli> command, but this is only used for informational purposes.
=head1 TUTORIAL
=head2 Load the library
First, we need to load the C<AI::TensorFlow::Libtensorflow> library and more helpers. We then create an C<AI::TensorFlow::Libtensorflow::Status> object and helper function to make sure that the calls to the C<libtensorflow> C library are working prop...
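The code itself is cut off in this fragment; a minimal sketch of such a status-checking helper (the C<AssertOK> name and the C<GetCode>/C<Message> methods follow the UpperCamelCase conventions used elsewhere in this distribution, so treat the details as assumptions rather than the exact original):

  use AI::TensorFlow::Libtensorflow;
  use AI::TensorFlow::Libtensorflow::Status;

  my $s = AI::TensorFlow::Libtensorflow::Status->New;

  # Die unless the last libtensorflow call reported an OK status.
  sub AssertOK {
      die "Status $_[0]: " . $_[0]->Message
          unless $_[0]->GetCode == AI::TensorFlow::Libtensorflow::Status::OK;
      return;
  }
  AssertOK($s);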
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod

      },
  );

  my $model_name = 'centernet_hourglass_512x512';

  say "Selected model: $model_name : $model_name_to_params{$model_name}{handle}";

We download the model to the current directory and then extract the model to a folder with the name given in C<$model_base>.

  my $model_uri = URI->new( $model_name_to_params{$model_name}{handle} );
  $model_uri->query_form( 'tf-hub-format' => 'compressed' );
  my $model_base = substr( $model_uri->path, 1 ) =~ s,/,_,gr;
  my $model_archive_path = "${model_base}.tar.gz";

  my $http = HTTP::Tiny->new;
  for my $download ( [ $model_uri => $model_archive_path ], ) {
      my ($uri, $path) = @$download;
      say "Downloading $uri to $path";
      next if -e $path;
      $http->mirror( $uri, $path );
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod

  my $ae = Archive::Extract->new( archive => $model_archive_path );
  die "Could not extract archive" unless $ae->extract( to => $model_base );
  my $saved_model = path($model_base)->child('saved_model.pb');
  say "Saved model is in $saved_model" if -f $saved_model;

We need to download the COCO 2017 classification labels and parse out the mapping from the numeric index to the textual descriptions.

  # Get the labels
  my $response = $http->get('https://raw.githubusercontent.com/tensorflow/models/a4944a57ad2811e1f6a7a87589a9fc8a776e8d3c/object_detection/data/mscoco_label_map.pbtxt');

  my %labels_map = $response->{content} =~ m<
      (?:item \s+ \{ \s+
          \Qname:\E \s+ "[^"]+" \s+
          \Qid:\E \s+ (\d+) \s+
          \Qdisplay_name:\E \s+ "([^"]+)" \s+
      })+
  >sgx;
  my $label_count = List::Util::max keys %labels_map;
  say "We have a label count of $label_count. These labels include: ",
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod

=item -
C<detection_boxes>: a C<tf.float32> tensor of shape [N, 4] containing bounding box coordinates in the following order: [ymin, xmin, ymax, xmax].
=item -
C<detection_classes>: a C<tf.int> tensor of shape [N] containing detection class index from the label file.
=item -
C<detection_scores>: a C<tf.float32> tensor of shape [N] containing detection scores.
=back
=back
Note that the above documentation has two errors: both C<num_detections> and C<detection_classes> are not of type C<tf.int>, but are actually C<tf.float32>.
Now we can load the model from that folder with the tag set C<[ 'serve' ]> by using the C<LoadFromSavedModel> constructor to create a C<::Graph> and a C<::Session> for that graph.
my $opt = AI::TensorFlow::Libtensorflow::SessionOptions->New;
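The fragment ends here; a sketch of the remaining load steps, mirroring the identical C<LoadFromSavedModel> call shown later in this document for the MobileNetV2 notebook (C<$s> and C<AssertOK> are the status object and helper described under the earlier "Load the library" section):

  my $graph = AI::TensorFlow::Libtensorflow::Graph->New;
  my $session = AI::TensorFlow::Libtensorflow::Session->LoadFromSavedModel(
      $opt, undef, $model_base, [ 'serve' ], $graph, undef, $s
  );
  AssertOK($s);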
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod

          op   => $graph->OperationByName('serving_default_input_tensor'),
          dict => {
              input_tensor => 0,
          }
      },
      out => {
          op   => $graph->OperationByName('StatefulPartitionedCall'),
          dict => {
              detection_boxes   => 0,
              detection_classes => 1,
              detection_scores  => 2,
              num_detections    => 3,
          }
      },
  );

  my %outputs;
  %outputs = map {
      my $put_type = $_;
      my $op = $ops{$put_type}{op};
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod

      }
  } keys %ops;

  p %outputs;
Now we can get the following testing image from GitHub.
  my %images_for_test_to_uri = (
      "beach_scene" => 'https://github.com/tensorflow/models/blob/master/research/object_detection/test_images/image2.jpg?raw=true',
  );
  my @image_names = sort keys %images_for_test_to_uri;

  my $h = HTML::Tiny->new;

  my $image_name = 'beach_scene';
  if( IN_IPERL ) {
      IPerl->html(
          $h->a( { href => $images_for_test_to_uri{$image_name} },
              $h->img({
                  src => $images_for_test_to_uri{$image_name},
                  alt => $image_name,
                  width => '100%',
              })
          ),
      );
  }
=head2 Download the test image and transform it into suitable input data
We now fetch the image and prepare it to be in the needed format by using C<Imager>. Note that this model does not need the input image to be of a certain size so no resizing or padding is required.
Then we turn the C<Imager> data into a C<PDL> ndarray. Since we just need the 3 channels of the image as they are, they can be stored directly in a C<PDL> ndarray of type C<byte>.
The reason why we need to concatenate the C<PDL> ndarrays here despite the model only taking a single image at a time is to get an ndarray with four (4) dimensions with the last C<PDL> dimension of size one (1).
sub load_image_to_pdl {
my ($uri, $image_size) = @_;
my $http = HTTP::Tiny->new;
my $response = $http->get( $uri );
die "Could not fetch image from $uri" unless $response->{success};
say "Downloaded $uri";
my $img = Imager->new;
$img->read( data => $response->{content} );
# Create PDL ndarray from Imager data in-memory.
my $data;
$img->write( data => \$data, type => 'raw' )
or die "could not write ". $img->errstr;
die "Image does not have 3 channels, it has @{[ $img->getchannels ]} channels"
if $img->getchannels != 3;
# $data is packed as PDL->dims == [w,h] with RGB pixels
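The concatenation step is cut off in the fragment above; a sketch of how the four-dimensional ndarray might be formed for this single image (C<cat> from PDL appends the new dimension last, giving the batch of one the text describes):

  my @pdl_images = map {
      load_image_to_pdl(
          $images_for_test_to_uri{$_},
          $model_name_to_params{$model_name}{image_size}
      );
  } @image_names;

  # A batch of one: PDL dims [ 3, w, h, 1 ].
  my $pdl_image_batched = cat(@pdl_images);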
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod

  my $tftensor_output_by_name = $RunSession->( $session, $t );

  my %pdl_output_by_name = map {
      $_ => FloatTFTensorToPDL( $tftensor_output_by_name->{$_} )
  } keys $tftensor_output_by_name->%*;

  undef;
=head2 Results summary
Then we use a score threshold to select the objects of interest.
my $min_score_thresh = 0.30;
my $which_detect = which( $pdl_output_by_name{detection_scores} > $min_score_thresh );
my %subset;
$subset{detection_boxes} = $pdl_output_by_name{detection_boxes}->dice('X', $which_detect);
$subset{detection_classes} = $pdl_output_by_name{detection_classes}->dice($which_detect);
$subset{detection_scores} = $pdl_output_by_name{detection_scores}->dice($which_detect);
$subset{detection_class_labels}->@* = map { $labels_map{$_} } $subset{detection_classes}->list;
p %subset;
The following uses the bounding boxes and class label information to draw boxes and labels on top of the image using Gnuplot.
use PDL::Graphics::Gnuplot;
my $plot_output_path = 'objects-detected.png';
my $gp = gpwin('pngcairo', font => ",12", output => $plot_output_path, aa => 2, size => [10] );
my @qual_cmap = ('#a6cee3','#1f78b4','#b2df8a','#33a02c','#fb9a99','#e31a1c','#fdbf6f','#ff7f00','#cab2d6');
$gp->options(
map {
my $idx = $_;
my $lc_rgb = $qual_cmap[ $subset{detection_classes}->slice("($idx)")->squeeze % @qual_cmap ];
my $box_corners_yx_norm = $subset{detection_boxes}->slice([],$idx,[0,0,0]);
$box_corners_yx_norm->reshape(2,2);
my $box_corners_yx_img = $box_corners_yx_norm * $pdl_images[0]->shape->slice('-1:-2');
my $from_xy = join ",", $box_corners_yx_img->slice('-1:0,(0)')->list;
my $to_xy = join ",", $box_corners_yx_img->slice('-1:0,(1)')->list;
my $label_xy = join ",", $box_corners_yx_img->at(1,1), $box_corners_yx_img->at(0,1);
(
[ object => [ "rect" =>
from => $from_xy, to => $to_xy,
qq{front fs empty border lc rgb "$lc_rgb" lw 5} ], ],
[ label => [
sprintf("%s: %.1f",
$subset{detection_class_labels}[$idx],
100*$subset{detection_scores}->at($idx,0) ) =>
at => $label_xy, 'left',
offset => 'character 0,-0.25',
qq{font ",12" boxed front tc rgb "#ffffff"} ], ],
)
} 0..$subset{detection_boxes}->dim(1)-1
);
$gp->plot(
topcmds => q{set style textbox opaque fc "#505050f0" noborder},
square => 1,
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod

  my $total = du( { 'human-readable' => 1, dereference => 1 },
      $model_archive_path, $model_base );

  say "Disk space usage: $total";

  undef;
=head1 CPANFILE
requires 'AI::TensorFlow::Libtensorflow';
requires 'AI::TensorFlow::Libtensorflow::DataType';
requires 'Archive::Extract';
requires 'Data::Printer';
requires 'Data::Printer::Filter::PDL';
requires 'FFI::Platypus::Buffer';
requires 'FFI::Platypus::Memory';
requires 'File::Which';
requires 'Filesys::DiskUsage';
requires 'HTML::Tiny';
requires 'HTTP::Tiny';
requires 'Imager';
requires 'List::Util', '1.56';
requires 'PDL';
requires 'PDL::Graphics::Gnuplot';
requires 'Path::Tiny';
requires 'Syntax::Construct';
requires 'Text::Table::Tiny';
requires 'URI';
requires 'constant';
requires 'feature';
requires 'lib::projectroot';
requires 'strict';
requires 'utf8';
requires 'warnings';
=head1 AUTHOR
Zakariyya Mughal <zmughal@cpan.org>
=head1 COPYRIGHT AND LICENSE
This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.
This is free software, licensed under:
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

      memcpy scalar_to_pointer( ${ $pdl->get_dataref } ),
          scalar_to_pointer( ${ $t->Data } ),
          $t->ByteSize;
      $pdl->upd_data;

      $pdl;
  }

  # Model handle
  $model_uri->query_form( 'tf-hub-format' => 'compressed' );
  my $model_base = substr( $model_uri->path, 1 ) =~ s,/,_,gr;
  my $model_archive_path = "${model_base}.tar.gz";
  my $model_sequence_length = 393_216; # bp

  # Human targets from Basenji2 dataset
  my $targets_uri = URI->new('https://raw.githubusercontent.com/calico/basenji/master/manuscripts/cross2020/targets_human.txt');
  my $targets_path = 'targets_human.txt';

  # Human reference genome
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

      die "Could not extract archive" unless $ae->extract( to => $model_base );
  }

  if( digest_file_hex( $hg_gz_path, "MD5" ) eq $hg_md5_digest ) {
      say "MD5 sum for $hg_gz_path OK";
  } else {
      die "Digest for $hg_gz_path failed";
  }

  (my $hg_uncompressed_path = $hg_gz_path) =~ s/\.gz$//;
  my $hg_bgz_path = "${hg_uncompressed_path}.bgz";

  use IPC::Run;

  if( ! -e $hg_bgz_path ) {
      IPC::Run::run(
          [ qw(gunzip -c) ], '<', $hg_gz_path,
          '|',
          [ qw(bgzip -c) ], '>', $hg_bgz_path
      );
  }
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

  my $hg_bgz_fai_path = "${hg_bgz_path}.fai";
  if( ! -e $hg_bgz_fai_path ) {
      my $faidx_tool = Bio::Tools::Run::Samtools->new( -command => 'faidx' );
      $faidx_tool->run( -fas => $hg_bgz_path )
          or die "Could not index FASTA file $hg_bgz_path: "
              . $faidx_tool->error_string;
  }

  sub saved_model_cli {
      my (@rest) = @_;
      if( File::Which::which('saved_model_cli')) {
          system( qw(saved_model_cli), @rest ) == 0
              or die "Could not run saved_model_cli";
      } else {
          warn "saved_model_cli(): Install the tensorflow Python package to get the `saved_model_cli` command.\n";
          return -1;
      }
  }

  say "Checking with saved_model_cli scan:";
  saved_model_cli( qw(scan),
      qw(--dir) => $model_base,
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

  sub center {
      my $self = shift;
      my $center = int( ( $self->start + $self->end ) / 2 );
      my $delta  = ( $self->start + $self->end ) % 2;
      return $center + $delta;
  }

  sub resize {
      my ($self, $width) = @_;
      my $new_interval = $self->clone;

      my $center = $self->center;
      my $half   = int( ($width-1) / 2 );
      my $offset = ($width-1) % 2;

      $new_interval->start( $center - $half - $offset );
      $new_interval->end( $center + $half );
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

  sub _op_stringify {
      sprintf "%s:%s",
          $_[0]->seq_id // "(no sequence)",
          $_[0]->to_FTstring
  }
  }

  #####

  {
      say "Testing interval resizing:\n";

      sub _debug_resize {
          my ($interval, $to, $msg) = @_;

          my $resized_interval = $interval->resize($to);

          die "Wrong interval size for $interval --($to)--> $resized_interval"
              unless $resized_interval->length == $to;

          say sprintf "Interval: %s -> %s, length %2d : %s",
              $interval, $resized_interval,
              $resized_interval->length, $msg;
      }

      for my $interval_spec ( [4, 8], [5, 8], [5, 9], [6, 9] ) {
          my ($start, $end) = @$interval_spec;
          my $test_interval = Interval->new( -seq_id => 'chr11', -start => $start, -end => $end );
          say sprintf "Testing interval %s with length %d",
              $test_interval, $test_interval->length;
          say "-----";
          for (0..5) {
              my $base = $test_interval->length;
              my $to   = $base + $_;
              _debug_resize $test_interval, $to, "$base -> $to (+ $_)";
          }
          say "";
      }
  }

  undef;

  use Bio::DB::HTS::Faidx;
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

  say "1 base: ", seq_info
      extract_sequence( $hg_db,
          Interval->new( -seq_id => 'chr11',
              -start => 35_082_742 + 1,
              -end   => 35_082_742 + 1 ) );

  say "3 bases: ", seq_info
      extract_sequence( $hg_db,
          Interval->new( -seq_id => 'chr11',
              -start => 1,
              -end   => 1 )->resize(3) );

  say "5 bases: ", seq_info
      extract_sequence( $hg_db,
          Interval->new( -seq_id => 'chr11',
              -start => $hg_db->length('chr11'),
              -end   => $hg_db->length('chr11') )->resize(5) );

  say "chr11 is of length ", $hg_db->length('chr11');
  say "chr11 bases: ", seq_info
      extract_sequence( $hg_db,
          Interval->new( -seq_id => 'chr11',
              -start => 1,
              -end   => $hg_db->length('chr11')
          )->resize( $hg_db->length('chr11') ) );
  }

  my $target_interval = Interval->new( -seq_id => 'chr11',
      -start => 35_082_742 + 1, # BioPerl is 1-based
      -end   => 35_197_430 );

  say "Target interval: $target_interval with length @{[ $target_interval->length ]}";

  die "Target interval is not $model_central_base_pairs_length bp long"
      unless $target_interval->length == $model_central_base_pairs_length;

  say "Target sequence is ", seq_info extract_sequence( $hg_db, $target_interval );
  say "";

  my $resized_interval = $target_interval->resize( $model_sequence_length );

  say "Resized interval: $resized_interval with length @{[ $resized_interval->length ]}";

  die "resize() is not working properly!"
      unless $resized_interval->length == $model_sequence_length;

  my $seq = extract_sequence( $hg_db, $resized_interval );

  say "Resized sequence is ", seq_info( $seq );

  my $sequence_one_hot = one_hot_dna( $seq )->dummy(-1);
  say $sequence_one_hot->info;

  undef;

  use Devel::Timer;
  my $t = Devel::Timer->new;
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

  say "Disk space usage: $total";

  undef;

  __END__
=pod
=encoding UTF-8
=head1 NAME
AI::TensorFlow::Libtensorflow::Manual::Notebook::InferenceUsingTFHubEnformerGeneExprPredModel - Using TensorFlow to do gene expression prediction using a pre-trained model
=head1 SYNOPSIS
The following tutorial is based on the L<Enformer usage notebook|https://github.com/deepmind/deepmind-research/blob/master/enformer/enformer-usage.ipynb>. It uses a pre-trained model based on a transformer architecture trained as described in Avsec e...
Running the code requires an Internet connection to download the model (from Google servers) and datasets (from GitHub, UCSC, and NIH).
Some of this code is identical to that of the C<InferenceUsingTFHubMobileNetV2Model> notebook. Please look there for an explanation of that code. As stated there, this will later be wrapped up into a high-level library to hide the details behind an API.
B<NOTE>: If running this model, please be aware that
=over
=item *
the Docker image takes 3 GB or more of disk space;
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

=head1 COLOPHON
The following document is either a POD file which can additionally be run as a Perl script or a Jupyter Notebook which can be run in L<IPerl|https://p3rl.org/Devel::IPerl> (viewable online at L<nbviewer|https://nbviewer.org/github/EntropyOrg/perl-AI-...
You will also need the executables C<gunzip>, C<bgzip>, and C<samtools>. Furthermore,
=over
=item *
C<Bio::DB::HTS> requires C<libhts> and
=item *
C<PDL::Graphics::Gnuplot> requires C<gnuplot>.
=back
If you are running the code, you may optionally install the L<C<tensorflow> Python package|https://www.tensorflow.org/install/pip> in order to access the C<saved_model_cli> command, but this is only used for informational purposes.
=head1 TUTORIAL
=head2 Load the library
First, we need to load the C<AI::TensorFlow::Libtensorflow> library and more helpers. We then create an C<AI::TensorFlow::Libtensorflow::Status> object and helper function to make sure that the calls to the C<libtensorflow> C library are working prop...
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

  }
=head2 Download model and data
=over
=item *
L<Enformer model|https://tfhub.dev/deepmind/enformer/1> from
> Avsec Ž, Agarwal V, Visentin D, Ledsam JR, Grabska-Barwinska A, Taylor KR, Assael Y, Jumper J, Kohli P, Kelley DR. Effective gene expression prediction from sequence by integrating long-range interactions. I<Nat Methods>. 2021 Oct;B<18(10)>:1196...
=item *
L<Human target dataset|https://github.com/calico/basenji/tree/master/manuscripts/cross2020> from
> Kelley DR. Cross-species regulatory sequence activity prediction. I<PLoS Comput Biol>. 2020 Jul 20;B<16(7)>:e1008050. doi: L<10.1371/journal.pcbi.1008050|https://doi.org/10.1371/journal.pcbi.1008050>. PMID: L<32687525|https://pubmed.ncbi.nlm.nih....
=item *
L<UCSC hg38 genome|https://www.ncbi.nlm.nih.gov/assembly/GCA_000001405.15>. More info at L<http://hgdownload.cse.ucsc.edu/goldenPath/hg38/bigZips/>; L<Genome Reference Consortium Human Build 38|https://www.ncbi.nlm.nih.gov/assembly/GCF_000001405.26/>...
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

=item *
L<ClinVar|https://www.ncbi.nlm.nih.gov/clinvar/> file
> Landrum MJ, Lee JM, Benson M, Brown GR, Chao C, Chitipiralla S, Gu B, Hart J, Hoffman D, Jang W, Karapetyan K, Katz K, Liu C, Maddipatla Z, Malheiro A, McDaniel K, Ovetsky M, Riley G, Zhou G, Holmes JB, Kattman BL, Maglott DR. ClinVar: improving ...
=back
# Model handle
my $model_uri = URI->new( 'https://tfhub.dev/deepmind/enformer/1' );
$model_uri->query_form( 'tf-hub-format' => 'compressed' );
my $model_base = substr( $model_uri->path, 1 ) =~ s,/,_,gr;
my $model_archive_path = "${model_base}.tar.gz";
my $model_sequence_length = 393_216; # bp
# Human targets from Basenji2 dataset
my $targets_uri = URI->new('https://raw.githubusercontent.com/calico/basenji/master/manuscripts/cross2020/targets_human.txt');
my $targets_path = 'targets_human.txt';
# Human reference genome
my $hg_uri = URI->new("http://hgdownload.cse.ucsc.edu/goldenPath/hg38/bigZips/hg38.fa.gz");
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

      [ $hg_uri => $hg_gz_path ],
      [ $clinvar_uri => $clinvar_path ], ) {
      my ($uri, $path) = @$download;
      say "Downloading $uri to $path";
      next if -e $path;
      $http->mirror( $uri, $path );
  }
B<STREAM (STDOUT)>:
  Downloading https://tfhub.dev/deepmind/enformer/1?tf-hub-format=compressed to deepmind_enformer_1.tar.gz
Downloading https://raw.githubusercontent.com/calico/basenji/master/manuscripts/cross2020/targets_human.txt to targets_human.txt
Downloading http://hgdownload.cse.ucsc.edu/goldenPath/hg38/bigZips/hg38.fa.gz to hg38.fa.gz
Downloading https://ftp.ncbi.nlm.nih.gov/pub/clinvar/vcf_GRCh38/clinvar.vcf.gz to clinvar.vcf.gz
Now we
=over
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

=item 1.
convert the gzip'd file into a block gzip'd file and
=item 2.
index that C<.bgz> file using C<faidx> from C<samtools>.
=back
(my $hg_uncompressed_path = $hg_gz_path) =~ s/\.gz$//;
my $hg_bgz_path = "${hg_uncompressed_path}.bgz";
use IPC::Run;
if( ! -e $hg_bgz_path ) {
IPC::Run::run(
[ qw(gunzip -c) ], '<', $hg_gz_path,
'|',
[ qw(bgzip -c) ], '>', $hg_bgz_path
);
}
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

      my $faidx_tool = Bio::Tools::Run::Samtools->new( -command => 'faidx' );
      $faidx_tool->run( -fas => $hg_bgz_path )
          or die "Could not index FASTA file $hg_bgz_path: "
              . $faidx_tool->error_string;
  }
=head2 Model input and output specification
Now we create a helper to call C<saved_model_cli> and call C<saved_model_cli scan> to ensure that the model is I/O-free for security reasons.
sub saved_model_cli {
my (@rest) = @_;
if( File::Which::which('saved_model_cli')) {
system(qw(saved_model_cli), @rest ) == 0
or die "Could not run saved_model_cli";
} else {
warn "saved_model_cli(): Install the tensorflow Python package to get the `saved_model_cli` command.\n";
return -1;
}
}
say "Checking with saved_model_cli scan:";
saved_model_cli( qw(scan),
qw(--dir) => $model_base,
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

the output C<human> which has the name C<StatefulPartitionedCall:0>.
=back
all of which are C<DT_FLOAT>.
Make note of the shapes that those take. Per the L<model description|https://tfhub.dev/deepmind/enformer/1> at TensorFlow Hub:
=over 2
The input sequence length is 393,216 with the prediction corresponding to 128 base pair windows for the center 114,688 base pairs. The input sequence is one hot encoded using the order of indices corresponding to 'ACGT' with N values being all zeros.
=back
The input shape C<(-1, 393216, 4)> thus represents dimensions C<[batch size] x [sequence length] x [one-hot encoding of ACGT]>.
The output shape C<(-1, 896, 5313)> represents dimensions C<[batch size] x [ predictions along 114,688 base pairs / 128 base pair windows ] x [ human target by index ]>. We can confirm this by doing some calculations:
my $model_central_base_pairs_length = 114_688; # bp
my $model_central_base_pair_window_size = 128; # bp / prediction
say "Number of predictions: ", $model_central_base_pairs_length / $model_central_base_pair_window_size;
B<STREAM (STDOUT)>:
Number of predictions: 896
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

      return $outputs_t[0];
  };

  undef;
=head2 Encoding the data
The model specifies that the way to get a sequence of DNA bases into a C<TFTensor> is to use L<one-hot encoding|https://en.wikipedia.org/wiki/One-hot#Machine_learning_and_statistics> in the order C<ACGT>.
This means that the bases are represented as vectors of length 4:
| base | vector encoding |
|------|-----------------|
| A | C<[1 0 0 0]> |
| C | C<[0 1 0 0]> |
| G | C<[0 0 1 0]> |
| T | C<[0 0 0 1]> |
| N | C<[0 0 0 0]> |
We can achieve this encoding by creating a lookup table with a PDL ndarray. This could be done by creating a byte PDL ndarray of dimensions C<[ 256 4 ]> to directly look up the the numeric value of characters 0-255, but here we'll go with a smaller C...
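As a sketch of the smaller lookup-table approach (the helper name C<one_hot_dna> matches its later use in this notebook; the implementation details here are illustrative, not the original):

  use PDL;

  sub one_hot_dna {
      my ($seq) = @_;
      my %base_to_index = ( A => 0, C => 1, G => 2, T => 3 );
      # Rows 0..3 one-hot encode A, C, G, T; row 4 (all zeros) stands in for N.
      my $lookup = zeroes( float, 4, 5 );
      $lookup->slice(',0:3') .= identity(4);
      my $indices = pdl( map { $base_to_index{ uc $_ } // 4 } split //, $seq );
      return $lookup->dice( 'X', $indices );   # dims [ 4, length($seq) ]
  }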
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

  sub center {
      my $self = shift;
      my $center = int( ( $self->start + $self->end ) / 2 );
      my $delta  = ( $self->start + $self->end ) % 2;
      return $center + $delta;
  }

  sub resize {
      my ($self, $width) = @_;
      my $new_interval = $self->clone;

      my $center = $self->center;
      my $half   = int( ($width-1) / 2 );
      my $offset = ($width-1) % 2;

      $new_interval->start( $center - $half - $offset );
      $new_interval->end( $center + $half );
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

  sub _op_stringify {
      sprintf "%s:%s",
          $_[0]->seq_id // "(no sequence)",
          $_[0]->to_FTstring
  }
  }

  #####

  {
      say "Testing interval resizing:\n";

      sub _debug_resize {
          my ($interval, $to, $msg) = @_;

          my $resized_interval = $interval->resize($to);

          die "Wrong interval size for $interval --($to)--> $resized_interval"
              unless $resized_interval->length == $to;

          say sprintf "Interval: %s -> %s, length %2d : %s",
              $interval, $resized_interval,
              $resized_interval->length, $msg;
      }

      for my $interval_spec ( [4, 8], [5, 8], [5, 9], [6, 9] ) {
          my ($start, $end) = @$interval_spec;
          my $test_interval = Interval->new( -seq_id => 'chr11', -start => $start, -end => $end );
          say sprintf "Testing interval %s with length %d",
              $test_interval, $test_interval->length;
          say "-----";
          for (0..5) {
              my $base = $test_interval->length;
              my $to   = $base + $_;
              _debug_resize $test_interval, $to, "$base -> $to (+ $_)";
          }
          say "";
      }
  }

  undef;
B<STREAM (STDOUT)>:
  Testing interval resizing:

  Testing interval chr11:4..8 with length 5
  -----
  Interval: chr11:4..8 -> chr11:4..8, length  5 : 5 -> 5 (+ 0)
  Interval: chr11:4..8 -> chr11:3..8, length  6 : 5 -> 6 (+ 1)
  Interval: chr11:4..8 -> chr11:3..9, length  7 : 5 -> 7 (+ 2)
  Interval: chr11:4..8 -> chr11:2..9, length  8 : 5 -> 8 (+ 3)
  Interval: chr11:4..8 -> chr11:2..10, length  9 : 5 -> 9 (+ 4)
  Interval: chr11:4..8 -> chr11:1..10, length 10 : 5 -> 10 (+ 5)
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

  say "1 base: ", seq_info
      extract_sequence( $hg_db,
          Interval->new( -seq_id => 'chr11',
              -start => 35_082_742 + 1,
              -end   => 35_082_742 + 1 ) );

  say "3 bases: ", seq_info
      extract_sequence( $hg_db,
          Interval->new( -seq_id => 'chr11',
              -start => 1,
              -end   => 1 )->resize(3) );

  say "5 bases: ", seq_info
      extract_sequence( $hg_db,
          Interval->new( -seq_id => 'chr11',
              -start => $hg_db->length('chr11'),
              -end   => $hg_db->length('chr11') )->resize(5) );

  say "chr11 is of length ", $hg_db->length('chr11');
  say "chr11 bases: ", seq_info
      extract_sequence( $hg_db,
          Interval->new( -seq_id => 'chr11',
              -start => 1,
              -end   => $hg_db->length('chr11')
          )->resize( $hg_db->length('chr11') ) );
  }
B<STREAM (STDOUT)>:

  Testing sequence extraction:
  1 base: G (length 1)
  3 bases: NNN (length 3)
  5 bases: NNNNN (length 5)
  chr11 is of length 135086622
  chr11 bases: NNNNNNNNNN...NNNNNNNNNN (length 135086622)

B<RESULT>:

  1
Now we can use the same target interval that is used in the example notebook which recreates part of L<figure 1|https://www.nature.com/articles/s41592-021-01252-x/figures/1> from the Enformer paper.
  my $target_interval = Interval->new( -seq_id => 'chr11',
      -start => 35_082_742 + 1, # BioPerl is 1-based
      -end   => 35_197_430 );

  say "Target interval: $target_interval with length @{[ $target_interval->length ]}";

  die "Target interval is not $model_central_base_pairs_length bp long"
      unless $target_interval->length == $model_central_base_pairs_length;

  say "Target sequence is ", seq_info extract_sequence( $hg_db, $target_interval );
  say "";

  my $resized_interval = $target_interval->resize( $model_sequence_length );

  say "Resized interval: $resized_interval with length @{[ $resized_interval->length ]}";

  die "resize() is not working properly!"
      unless $resized_interval->length == $model_sequence_length;

  my $seq = extract_sequence( $hg_db, $resized_interval );

  say "Resized sequence is ", seq_info( $seq );
B<STREAM (STDOUT)>:

  Target interval: chr11:35082743..35197430 with length 114688
  Target sequence is GGTGGCAGCC...ATCTCCTTTT (length 114688)
  Resized interval: chr11:34943479..35336694 with length 393216
  Resized sequence is ACTAGTTCTA...GGCCCAAATC (length 393216)

B<RESULT>:

  1
To prepare the input we have to one-hot encode this resized sequence and give it a dummy dimension at the end to indicate that it is a batch with a single sequence. Then we can turn the PDL ndarray into a C<TFTensor> and pass it to our prediction ...
  my $sequence_one_hot = one_hot_dna( $seq )->dummy(-1);
  say $sequence_one_hot->info;

  undef;
B<STREAM (STDOUT)>:
PDL: Float D [4,393216,1]
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

  $gp->end_multi;

  $gp->close;

  if( IN_IPERL ) {
      IPerl->png( bytestream => path($plot_output_path)->slurp_raw );
  }
B<DISPLAY>:
=for html <span style="display:inline-block;margin-left:1em;"><p><img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAA+gAAAMgCAIAAAA/et9qAAAgAElEQVR4nOzdd2AUVeIH8Ddb0jshBAIEpSo1GjoIpyAgCOqd3uGdoGBBUQQFRUVBRbkTf9gOBQucqFiwUhSSgJQYCCSBkJBAet1k...
=head2 Parts of the original notebook that fall outside the scope
In the original notebook, there are several more steps that have not been ported here:
=over
=item 1.
"Compute contribution scores":
This task requires implementing C<@tf.function> to compile gradients.
=item 2.
"Predict the effect of a genetic variant" and "Score multiple variants":
The first task is possible, but the second task requires loading a pre-processing pipeline for scikit-learn and unfortunately this pipeline is stored as a pickle file that is valid for an older version of scikit-learn (version 0.23.2) and as such its...
=back
# Some code that could be used for working with variants.
1 if <<'COMMENT';
use Bio::DB::HTS::VCF;
my $clinvar_tbi_path = "${clinvar_path}.tbi";
unless( -f $clinvar_tbi_path ) {
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod

  );
  say "Disk space usage: $total";

  undef;
B<STREAM (STDOUT)>:
Disk space usage: 4.66G
=head1 CPANFILE
requires 'AI::TensorFlow::Libtensorflow';
requires 'AI::TensorFlow::Libtensorflow::DataType';
requires 'Archive::Extract';
requires 'Bio::DB::HTS::Faidx';
requires 'Bio::Location::Simple';
requires 'Bio::Tools::Run::Samtools';
requires 'Data::Frame';
requires 'Data::Printer';
requires 'Data::Printer::Filter::PDL';
requires 'Devel::Timer';
requires 'Digest::file';
requires 'FFI::Platypus::Buffer';
requires 'FFI::Platypus::Memory';
requires 'File::Which';
requires 'Filesys::DiskUsage';
requires 'HTTP::Tiny';
requires 'IPC::Run';
requires 'List::Util';
requires 'PDL';
requires 'PDL::Graphics::Gnuplot';
requires 'Path::Tiny';
requires 'Syntax::Construct';
requires 'Text::Table::Tiny';
requires 'URI';
requires 'constant';
requires 'feature';
requires 'lib::projectroot';
requires 'overload';
requires 'parent';
requires 'strict';
requires 'utf8';
requires 'warnings';
=head1 AUTHOR
Zakariyya Mughal <zmughal@cpan.org>
=head1 COPYRIGHT AND LICENSE
This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.
This is free software, licensed under:
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod

          image_size => [ 224, 224 ],
      },
  );

  my $model_name = 'mobilenet_v2_100_224';

  say "Selected model: $model_name : $model_name_to_params{$model_name}{handle}";

  my $model_uri = URI->new( $model_name_to_params{$model_name}{handle} );
  $model_uri->query_form( 'tf-hub-format' => 'compressed' );
  my $model_base = substr( $model_uri->path, 1 ) =~ s,/,_,gr;
  my $model_archive_path = "${model_base}.tar.gz";

  my $labels_uri = URI->new('https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt');
  my $labels_path = ($labels_uri->path_segments)[-1];

  my $http = HTTP::Tiny->new;
  for my $download ( [ $model_uri => $model_archive_path ],
218219220221222223224225226227228229230231232233234235236237238239240241242243244245246247248249250251252253254255256257258259260261262263264265266267268269270271272273274275276277278279
alt
=>
$image_name
,
width
=>
'50%'
,
})
),
)
})
);
}
sub
imager_paste_center_pad {
my
(
$inner
,
$padded_sz
,
@rest
) =
@_
;
my
$outer
= Imager->new( List::Util::mesh( [
qw(xsize ysize)
],
$padded_sz
),
@rest
);
$outer
->paste(
left
=>
int
( (
$outer
->getwidth -
$inner
->getwidth ) / 2 ),
top
=>
int
( (
$outer
->getheight -
$inner
->getheight) / 2 ),
src
=>
$inner
,
);
$outer
;
}
sub
imager_scale_to {
my
(
$img
,
$image_size
) =
@_
;
my
$rescaled
=
$img
->scale(
List::Util::mesh( [
qw(xpixels ypixels)
],
$image_size
),
type
=>
'min'
,
qtype
=>
'mixing'
,
# 'mixing' seems to work better than 'normal'
);
}
sub
load_image_to_pdl {
my
(
$uri
,
$image_size
) =
@_
;
my
$http
= HTTP::Tiny->new;
my
$response
=
$http
->get(
$uri
);
die
"Could not fetch image from $uri"
unless
$response
->{success};
say
"Downloaded $uri"
;
my
$img
= Imager->new;
$img
->
read
(
data
=>
$response
->{content} );
my
$rescaled
= imager_scale_to(
$img
,
$image_size
);
say
sprintf
"Rescaled image from [ %d x %d ] to [ %d x %d ]"
,
$img
->getwidth,
$img
->getheight,
$rescaled
->getwidth,
$rescaled
->getheight;
my
$padded
= imager_paste_center_pad(
$rescaled
,
$image_size
,
# ARGB fits in 32-bits (uint32_t)
channels
=> 4
);
say
sprintf
"Padded to [ %d x %d ]"
,
$padded
->getwidth,
$padded
->getheight;
# Create PDL ndarray from Imager data in-memory.
my
$data
;
$padded
->
write
(
data
=> \
$data
,
type
=>
'raw'
)
or
die
"could not write "
.
$padded
->errstr;
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod

=pod
=encoding UTF-8
=head1 NAME
AI::TensorFlow::Libtensorflow::Manual::Notebook::InferenceUsingTFHubMobileNetV2Model - Using TensorFlow to do image classification using a pre-trained model
=head1 SYNOPSIS
The following tutorial is based on the L<Image Classification with TensorFlow Hub notebook|https://github.com/tensorflow/docs/blob/master/site/en/hub/tutorials/image_classification.ipynb>. It uses a pre-trained model based on the I<MobileNet V2> arch...
Please look at the L<SECURITY note|https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md> regarding running models as models are programs. You can also use C<saved_model_cli scan> to check for L<security-sensitive "denylisted ops"|https:/...
If you would like to visualise a model, you can use L<Netron|https://github.com/lutzroeder/netron> on the C<.pb> file.
=head1 COLOPHON
The following document is either a POD file which can additionally be run as a Perl script or a Jupyter Notebook which can be run in L<IPerl|https://p3rl.org/Devel::IPerl> (viewable online at L<nbviewer|https://nbviewer.org/github/EntropyOrg/perl-AI-...
If you are running the code, you may optionally install the L<C<tensorflow> Python package|https://www.tensorflow.org/install/pip> in order to access the C<saved_model_cli> command, but this is only used for informational purposes.
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod
Selected model: mobilenet_v2_100_224 : https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/classification/5
B<RESULT>:
1
We download the model and labels to the current directory then extract the model to a folder with the name given in C<$model_base>.

  my $model_uri = URI->new( $model_name_to_params{$model_name}{handle} );
  $model_uri->query_form( 'tf-hub-format' => 'compressed' );
  my $model_base = substr( $model_uri->path, 1 ) =~ s,/,_,gr;
  my $model_archive_path = "${model_base}.tar.gz";

  my $labels_uri = URI->new('https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt');
  my $labels_path = ($labels_uri->path_segments)[-1];

  my $http = HTTP::Tiny->new;
  for my $download ( [ $model_uri => $model_archive_path ],
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod

  my $saved_model = path($model_base)->child('saved_model.pb');
  say "Saved model is in $saved_model" if -f $saved_model;

  my @labels = path($labels_path)->lines( { chomp => 1 });

  die "Labels should have @{[ IMAGENET_LABEL_COUNT_WITH_BG ]} items"
      unless @labels == IMAGENET_LABEL_COUNT_WITH_BG;
  say "Got labels: ", join( ", ", List::Util::head(5, @labels) ), ", etc.";

B<STREAM (STDOUT)>:

  Downloading https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/classification/5?tf-hub-format=compressed to google_imagenet_mobilenet_v2_100_224_classification_5.tar.gz
  Downloading https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt to ImageNetLabels.txt
  Saved model is in google_imagenet_mobilenet_v2_100_224_classification_5/saved_model.pb
  Got labels: background, tench, goldfish, great white shark, tiger shark, etc.

B<RESULT>:

  1
=head2 Load the model and session
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod

  Method name is: tensorflow/serving/predict

B<RESULT>:

  1

The above C<saved_model_cli> output shows that the model input is at C<serving_default_inputs:0> which means the operation named C<serving_default_inputs> at index C<0> and the output is at C<StatefulPartitionedCall:0> which means the operation named...

It also shows the type and shape of the C<TFTensor>s for those inputs and outputs. Together this is known as a signature.

For the C<input>, we have C<(-1, 224, 224, 3)> which is a L<common input image specification for TensorFlow Hub|https://www.tensorflow.org/hub/common_signatures/images#input>. This is known as C<channels_last> (or C<NHWC>) layout where the TensorFlow...

For the C<output>, we have C<(-1, 1001)> which is C<[batch_size, num_classes]> where the elements are scores that the image received for that ImageNet class.

Now we can load the model from that folder with the tag set C<[ 'serve' ]> by using the C<LoadFromSavedModel> constructor to create a C<::Graph> and a C<::Session> for that graph.

  my $opt = AI::TensorFlow::Libtensorflow::SessionOptions->New;

  my $graph = AI::TensorFlow::Libtensorflow::Graph->New;
  my $session = AI::TensorFlow::Libtensorflow::Session->LoadFromSavedModel(
      $opt, undef, $model_base, \@tags, $graph, undef, $s
  );
  AssertOK($s);
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod

      })
  );
  }
B<DISPLAY>:
=for html <span style="display:inline-block;margin-left:1em;"><p><table style="width: 100%"><tr><td><tt>apple</tt></td><td><a href="https://upload.wikimedia.org/wikipedia/commons/1/15/Red_Apple.jpg"><img alt="apple" src="https://upload.wikimedia.org/...
=head2 Download the test images and transform them into suitable input data
We now fetch these images and prepare them to be in the needed format by using C<Imager> to resize and add padding. Then we turn the C<Imager> data into a C<PDL> ndarray. Since the C<Imager> data is stored as 32-bits with 4 channels in the order ...
We then take all the PDL ndarrays and concatenate them. Again, note that the dimension lists for the PDL ndarray and the TFTensor are reversed.
sub imager_paste_center_pad {
my ($inner, $padded_sz, @rest) = @_;
my $outer = Imager->new( List::Util::mesh( [qw(xsize ysize)], $padded_sz ),
@rest
);
$outer->paste(
left => int( ($outer->getwidth - $inner->getwidth ) / 2 ),
top => int( ($outer->getheight - $inner->getheight) / 2 ),
src => $inner,
);
$outer;
}
sub imager_scale_to {
my ($img, $image_size) = @_;
my $rescaled = $img->scale(
List::Util::mesh( [qw(xpixels ypixels)], $image_size ),
type => 'min',
qtype => 'mixing', # 'mixing' seems to work better than 'normal'
);
}
sub load_image_to_pdl {
my ($uri, $image_size) = @_;
my $http = HTTP::Tiny->new;
my $response = $http->get( $uri );
die "Could not fetch image from $uri" unless $response->{success};
say "Downloaded $uri";
my $img = Imager->new;
$img->read( data => $response->{content} );
my $rescaled = imager_scale_to($img, $image_size);
say sprintf "Rescaled image from [ %d x %d ] to [ %d x %d ]",
$img->getwidth, $img->getheight,
$rescaled->getwidth, $rescaled->getheight;
my $padded = imager_paste_center_pad($rescaled, $image_size,
# ARGB fits in 32-bits (uint32_t)
channels => 4
);
say sprintf "Padded to [ %d x %d ]", $padded->getwidth, $padded->getheight;
# Create PDL ndarray from Imager data in-memory.
my $data;
$padded->write( data => \$data, type => 'raw' )
or die "could not write ". $padded->errstr;
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod

B<STREAM (STDERR)>:
=for html <span style="display:inline-block;margin-left:1em;"><pre style="display: block"><code><span style="color: #cc66cc;">AI::TensorFlow::Libtensorflow::Tensor</span><span style=""> </span><span style="color: #33ccff;">{</span><span style="">
</span><span style="color: #6666cc;">Type </span><span style=""> </span><span style="color: #cc66cc;">FLOAT</span><span style="">
</span><span style="color: #6666cc;">Dims </span><span style=""> </span><span style="color: #33ccff;">[</span><span style=""> </span><span style="color: #ff6633;">1</span><span style=""> </span><span style="color: #ff6633;">1001</span><...
</span><span style="color: #6666cc;">NumDims </span><span style=""> </span><span style="color: #ff6633;">2</span><span style="">
</span><span style="color: #6666cc;">ElementCount </span><span style=""> </span><span style="color: #ff6633;">1001</span><span style="">
</span><span style="color: #33ccff;">}</span><span style="">
</span></code></pre></span>
Then we send the batched image data. The returned scores need to be normalised using the L<softmax function|https://en.wikipedia.org/wiki/Softmax_function> with the following formula (taken from Wikipedia):
$$ \sigma(\mathbf{z})_{i} = \frac{e^{z_{i}}}{\sum_{j=1}^{K} e^{z_{j}}} \quad \text{for } i = 1, \dotsc, K \text{ and } \mathbf{z} = (z_{1}, \dotsc, z_{K}) \in \mathbb{R}^{K}. $$
my $output_pdl_batched = FloatTFTensorToPDL($RunSession->($session, $t));
my $softmax = sub { ( map $_/sumover($_)->dummy(0), exp($_[0]) )[0] };
my $probabilities_batched = $softmax->($output_pdl_batched);
p $probabilities_batched;
B<STREAM (STDERR)>:
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod

              $probabilities_batched->at( $label_index, $batch_idx ),
          ) ];
      }
      say generate_table( rows => [ $header, @rows ], header_row => 1 );
      "\n";
  }
  }
B<DISPLAY>:
=for html <span style="display:inline-block;margin-left:1em;"><p><table style="width: 100%"><tr><td><tt>apple</tt></td><td><a href="https://upload.wikimedia.org/wikipedia/commons/1/15/Red_Apple.jpg"><img alt="apple" src="https://upload.wikimedia.org/...
my $p_approx_batched = $probabilities_batched->sumover->approx(1, 1e-5);
p $p_approx_batched;
say "All probabilities sum up to approximately 1" if $p_approx_batched->all->sclr;
B<STREAM (STDOUT)>:
All probabilities sum up to approximately 1
B<STREAM (STDERR)>:
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubMobileNetV2Model.pod

  my @solid_channel_uris = (
      'https://upload.wikimedia.org/wikipedia/commons/thumb/f/ff/Solid_blue.svg/480px-Solid_blue.svg.png',
  );

  undef;
=head1 CPANFILE
requires 'AI::TensorFlow::Libtensorflow';
requires 'AI::TensorFlow::Libtensorflow::DataType';
requires 'Archive::Extract';
requires 'Data::Printer';
requires 'Data::Printer::Filter::PDL';
requires 'FFI::Platypus::Buffer';
requires 'FFI::Platypus::Memory';
requires 'File::Which';
requires 'Filesys::DiskUsage';
requires 'HTML::Tiny';
requires 'HTTP::Tiny';
requires 'Imager';
requires 'List::Util';
requires 'PDL';
requires 'PDL::GSL::RNG';
requires 'Path::Tiny';
requires 'Syntax::Construct';
requires 'Text::Table::Tiny';
requires 'URI';
requires 'constant';
requires 'feature';
requires 'lib::projectroot';
requires 'strict';
requires 'utf8';
requires 'warnings';
=head1 AUTHOR
Zakariyya Mughal <zmughal@cpan.org>
=head1 COPYRIGHT AND LICENSE
This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.
This is free software, licensed under:
lib/AI/TensorFlow/Libtensorflow/Manual/Quickstart.pod

This provides a tour of C<libtensorflow> to help get started with using the library.
=head1 CONVENTIONS
The library uses UpperCamelCase naming convention for method names in order to
match the underlying C library (for compatibility with future API changes) and
to make translating code from C easier as this is a low-level API.
As such, constructors for objects that correspond to C<libtensorflow> data
structures are typically called C<New>. For example, a new
L<AI::TensorFlow::Libtensorflow::Status> object can be created as follows
use AI::TensorFlow::Libtensorflow::Status;
my $status = AI::TensorFlow::Libtensorflow::Status->New;
ok defined $status, 'Created new Status';
These C<libtensorflow> data structures use L<destructors|perlobj/Destructors> where necessary.
=head1 OBJECT TYPES
=over 4
=item L<AI::TensorFlow::Libtensorflow::Status>
Used for error-handling. Many methods take this as the final argument which is
then checked after the method call to ensure that it completed successfully.
=item L<AI::TensorFlow::Libtensorflow::Tensor>, L<AI::TensorFlow::Libtensorflow::DataType>
A C<TFTensor> is a multi-dimensional data structure that stores the data for inputs and outputs.
Each element has the same data type
which is defined by L<AI::TensorFlow::Libtensorflow::DataType>
thus a C<TFTensor> is considered to be a "homogeneous data structure".
See L<Introduction to Tensors|https://www.tensorflow.org/guide/tensor> for more.
=item L<AI::TensorFlow::Libtensorflow::OperationDescription>, L<AI::TensorFlow::Libtensorflow::Operation>
An operation is a function that has inputs and outputs. It has a user-defined
name (such as C<MyAdder>) and library-defined type (such as C<AddN>).
L<AI::TensorFlow::Libtensorflow::OperationDescription> is used to build an
lib/AI/TensorFlow/Libtensorflow/Manual/Quickstart.pod

The object types in L</OBJECT TYPES> are used in the following tutorials:
=over 4
=item L<InferenceUsingTFHubMobileNetV2Model|AI::TensorFlow::Libtensorflow::Manual::Notebook::InferenceUsingTFHubMobileNetV2Model>: image classification tutorial
This tutorial demonstrates using a pre-trained SavedModel and creating a L<AI::TensorFlow::Libtensorflow::Session> with the
L<LoadFromSavedModel|AI::TensorFlow::Libtensorflow::Session/LoadFromSavedModel>
method. It also demonstrates how to prepare image data for use as an input C<TFTensor>.
=item L<InferenceUsingTFHubEnformerGeneExprPredModel|AI::TensorFlow::Libtensorflow::Manual::Notebook::InferenceUsingTFHubEnformerGeneExprPredModel>: gene expression prediction tutorial
This tutorial builds on L<InferenceUsingTFHubMobileNetV2Model|AI::TensorFlow::Libtensorflow::Manual::Notebook::InferenceUsingTFHubMobileNetV2Model>.
It shows how to convert a pre-trained SavedModel from one that does not have a
usable signature to a new model that does. It also demonstrates how to prepare
genomic data for use as an input C<TFTensor>.
=back
=head1 DOCKER IMAGES
lib/AI/TensorFlow/Libtensorflow/OperationDescription.pm

  );
  $ffi->load_custom_type(PackableArrayRef( 'BoolArrayRef',
      pack_type => 'C' )
      => 'tf_attr_bool_list',
  );

  $ffi->attach( [ 'NewOperation' => 'New' ] => [
      arg 'TF_Graph' => 'graph',
      arg 'string' => 'op_type',
      arg 'string' => 'oper_name',
  ] => 'TF_OperationDescription' => sub {
      my ($xs, $class, @rest) = @_;
      $xs->(@rest);
  });

  $ffi->attach( [ 'NewOperationLocked' => 'NewLocked' ] => [
      arg 'TF_Graph' => 'graph',
      arg 'string' => 'op_type',
      arg 'string' => 'oper_name',
  ] => 'TF_OperationDescription' );

  $ffi->attach( 'SetDevice' => [
      arg 'TF_OperationDescription' => 'desc',
lib/AI/TensorFlow/Libtensorflow/Session.pm

  my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
  $ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);

  $ffi->attach( [ 'NewSession' => 'New' ] =>
      [
          arg 'TF_Graph' => 'graph',
          arg 'TF_SessionOptions' => 'opt',
          arg 'TF_Status' => 'status',
      ],
      => 'TF_Session' => sub {
          my ($xs, $class, @rest) = @_;
          return $xs->(@rest);
  });

  $ffi->attach( [ 'LoadSessionFromSavedModel' => 'LoadFromSavedModel' ] => [
      arg TF_SessionOptions => 'session_options',
      arg opaque => { id => 'run_options', ffi_type => 'TF_Buffer', maybe => 1 },
      arg string => 'export_dir',
      arg 'string[]' => 'tags',
      arg int => 'tags_len',
      arg TF_Graph => 'graph',
      arg opaque => { id => 'meta_graph_def', ffi_type => 'TF_Buffer', maybe => 1 },
      arg TF_Status => 'status',
  ] => 'TF_Session' => sub {
      my ($xs, $class, @rest) = @_;
      my ($session_options, $run_options, $export_dir, $tags, $graph,
          $meta_graph_def, $status) = @rest;

      $run_options = $ffi->cast('TF_Buffer', 'opaque', $run_options)
          if defined $run_options;
      $meta_graph_def = $ffi->cast('TF_Buffer', 'opaque', $meta_graph_def)
          if defined $meta_graph_def;

      my $tags_len = @$tags;

      $xs->(
lib/AI/TensorFlow/Libtensorflow/Session.pm view on Meta::CPAN
=head2 LoadFromSavedModel
B<C API>: L<< C<TF_LoadSessionFromSavedModel>|AI::TensorFlow::Libtensorflow::Manual::CAPI/TF_LoadSessionFromSavedModel >>
=head1 METHODS
=head2 Run
Runs the graph associated with the session, feeding the operations given
by C<$inputs> with the corresponding values in C<$input_values>.
The values produced at the outputs given by C<$outputs> will be placed in
C<$output_values>.
B<Parameters>
=over 4
=item Maybe[TFBuffer] $run_options
Optional C<TFBuffer> containing a serialized representation of a C<RunOptions> protocol buffer.
=item ArrayRef[TFOutput] $inputs
Inputs to set.
=item ArrayRef[TFTensor] $input_values
Values to assign to the inputs given by C<$inputs>.
=item ArrayRef[TFOutput] $outputs
lib/AI/TensorFlow/Libtensorflow/Session.pm view on Meta::CPAN
Reference to where the output values for C<$outputs> will be placed.
=item ArrayRef[TFOperation] $target_opers
TODO
=item Maybe[TFBuffer] $run_metadata
Optional empty C<TFBuffer> which will be updated to contain a serialized
representation of a C<RunMetadata> protocol buffer.
=item L<TFStatus|AI::TensorFlow::Libtensorflow::Lib::Types/TFStatus> $status
Status
=back
B<C API>: L<< C<TF_SessionRun>|AI::TensorFlow::Libtensorflow::Manual::CAPI/TF_SessionRun >>
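As an illustration of the parameter order (a sketch; it assumes the inputs, outputs, and values have been prepared as described above):

  my @output_values;
  $session->Run(
      undef,            # $run_options   (Maybe[TFBuffer])
      \@inputs,         # $inputs        (ArrayRef[TFOutput])
      \@input_values,   # $input_values  (ArrayRef[TFTensor])
      \@outputs,        # $outputs       (ArrayRef[TFOutput])
      \@output_values,  # $output_values (filled in by Run)
      [],               # $target_opers  (ArrayRef[TFOperation])
      undef,            # $run_metadata  (Maybe[TFBuffer])
      $status,          # $status        (TFStatus)
  );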
=head2 PRunSetup
lib/AI/TensorFlow/Libtensorflow/TFLibrary.pm view on Meta::CPAN
use strict;
use warnings;

my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;

$ffi->attach( [ 'LoadLibrary' => 'LoadLibrary' ] => [
    arg string => 'library_filename',
    arg TF_Status => 'status',
] => 'TF_Library' => sub {
    my ($xs, $class, @rest) = @_;
    $xs->(@rest);
} );

$ffi->attach( [ 'GetOpList' => 'GetOpList' ] => [
    arg TF_Library => 'lib_handle'
] => 'TF_Buffer' );

$ffi->attach( [ 'DeleteLibraryHandle' => 'DESTROY' ] => [
    arg TF_Library => 'lib_handle'
] => 'void' );
lib/AI/TensorFlow/Libtensorflow/TFLibrary.pm view on Meta::CPAN
  my $buf = AI::TensorFlow::Libtensorflow::TFLibrary->GetAllOpList();
  cmp_ok $buf->length, '>', 0, 'Got OpList buffer';
B<Returns>
=over 4
=item L<TFBuffer|AI::TensorFlow::Libtensorflow::Lib::Types/TFBuffer>
Contains a serialized C<OpList> proto for ops registered in this address space.
=back
B<C API>: L<< C<TF_GetAllOpList>|AI::TensorFlow::Libtensorflow::Manual::CAPI/TF_GetAllOpList >>
=head1 METHODS
=head2 GetOpList
B<C API>: L<< C<TF_GetOpList>|AI::TensorFlow::Libtensorflow::Manual::CAPI/TF_GetOpList >>
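For example (a sketch; C<libmyops.so> is a placeholder for a real plugin library, and C<TFStatus> checks are elided):

  my $status = AI::TensorFlow::Libtensorflow::Status->New;
  my $lib = AI::TensorFlow::Libtensorflow::TFLibrary->LoadLibrary(
      'libmyops.so', $status );
  # Serialized OpList proto describing the ops registered by that library.
  my $op_list_buf = $lib->GetOpList;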
lib/AI/TensorFlow/Libtensorflow/Tensor.pm view on Meta::CPAN
# C: TF_AllocateTensor
#
# Constructor
$ffi->attach( [ 'AllocateTensor', 'Allocate' ],
    [
        arg 'TF_DataType' => 'dtype',
        arg 'tf_dims_buffer' => [ qw(dims num_dims) ],
        arg 'size_t' => 'len',
    ],
    => 'TF_Tensor' => sub {
        my ($xs, $class, @rest) = @_;
        my ($dtype, $dims, $len) = @rest;
        # Default the byte length to element size times number of elements
        # when it is not given explicitly.
        if ( !defined $len ) {
            $len = product( $dtype->Size, @$dims );
        }
        my $obj = $xs->( $dtype, $dims, $len );
    }
);

$ffi->attach( [ 'DeleteTensor' => 'DESTROY' ],
    [ arg 'TF_Tensor' => 't' ]
    => 'void'
87888990919293949596979899100101102103104105106107108
if
(
exists
$self
->{_deallocator_closure} ) {
$self
->{_deallocator_closure}->unstick;
}
}
);
$ffi
->attach( [
'TensorData'
=>
'Data'
],
[ arg
'TF_Tensor'
=>
'self'
],
=>
'opaque'
=>
sub
{
my
(
$xs
,
@rest
) =
@_
;
my
(
$self
) =
@rest
;
my
$data_p
=
$xs
->(
@rest
);
window(
my
$buffer
,
$data_p
,
$self
->ByteSize);
\
$buffer
;
}
);
$ffi
->attach( [
'TensorByteSize'
=>
'ByteSize'
],
[ arg
'TF_Tensor'
=>
'self'
],
=>
'size_t'
);
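# To illustrate how Data and ByteSize combine (a sketch, not part of this
# file; it assumes FLOAT is imported from
# AI::TensorFlow::Libtensorflow::DataType):
my $t = AI::TensorFlow::Libtensorflow::Tensor->Allocate( FLOAT, [2, 2] );
# Data returns a reference to a scalar windowed over the tensor's buffer,
# whose length in bytes is ByteSize; unpack it as native floats.
my @floats = unpack 'f*', ${ $t->Data };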
lib/AI/TensorFlow/Libtensorflow/Tensor.pm view on Meta::CPAN
=head1 DESCRIPTION
A C<TFTensor> is an object that contains values of a
single type arranged in an n-dimensional array.
For types other than L<STRING|AI::TensorFlow::Libtensorflow::DataType/STRING>,
the data buffer is stored in L<row major order|https://en.wikipedia.org/wiki/Row-_and_column-major_order>.
Of note, this is different from the definition of I<tensor> used in
mathematics and physics, where a tensor may also be represented as a
multi-dimensional array in some cases, but is defined not by that
representation but by how it transforms. For more on this distinction, see:
=over 4
Lim, L.-H. (2021). L<Tensors in computations|https://galton.uchicago.edu/~lekheng/work/acta.pdf>.
Acta Numerica, 30, 555–764. Cambridge University Press.
=back
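As a concrete illustration of the row-major layout (an example, not module code): in a C<FLOAT> tensor of shape C<[2, 3]>, element C<($i, $j)> begins at byte offset C<4 * (3 * $i + $j)>.

  my ( $rows, $cols ) = ( 2, 3 );        # shape [2, 3]
  my ( $i, $j )       = ( 1, 2 );        # last element
  my $offset = 4 * ( $i * $cols + $j );  # 20 bytes into the 24-byte buffer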
=head1 CONSTRUCTORS
=head2 New
=over 2
maint/cpanfile-git view on Meta::CPAN
requires 'Alien::Libtensorflow', branch => 'master';
requires 'PDL', branch => 'master';
maint/inc/Pod/Elemental/Transformer/TF_Sig.pm view on Meta::CPAN
package Pod::Elemental::Transformer::TF_Sig;
# ABSTRACT: TensorFlow signatures

use Moose;
extends 'Pod::Elemental::Transformer::List';
maint/inc/Pod/Elemental/Transformer/TF_Sig.pm view on Meta::CPAN
    unshift @replacements, $prefix if defined $prefix;
    @replacements;
};

sub __paras_for_num_marker { die "only support definition lists" }
sub __paras_for_bul_marker { die "only support definition lists" }

around __paras_for_def_marker => sub {
    my ($orig, $self, $rest) = @_;
    my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
    my $type_library = 'AI::TensorFlow::Libtensorflow::Lib::Types';
    my @types = ( $rest );
    my $process_type = sub {
        my ($type) = @_;
        my $new_type_text = $type;
        my $info;
        if (   eval { $info->{TT}  = t($type); 1 }
            || eval { $info->{FFI} = $ffi->type_meta($type); 1 } ) {
            if ( $info->{TT} && $info->{TT}->library eq $type_library ) {
                $new_type_text = "L<$type|$type_library/$type>";
            }
        } else {
            die "Could not find type constraint or FFI::Platypus type $type";
        }
        $new_type_text;
    };
    my $type_re = qr{
        \A (?<ws>\s*) (?<type> \w+)
    }xm;
    $rest =~ s[$type_re]{$+{ws} . $process_type->($+{type}) }ge;
    my @replacements = $orig->($self, $rest);
    @replacements;
};

1;
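# To see what the around modifier above does, here is a standalone
# re-creation of its substitution (an illustration only; it skips the
# type-constraint lookup and hard-codes the same $type_library value):
my $type_library = 'AI::TensorFlow::Libtensorflow::Lib::Types';
my $rest = 'TFTensor $t';
$rest =~ s{ \A (?<ws>\s*) (?<type>\w+) }
          {$+{ws} . qq{L<$+{type}|$type_library/$+{type}>}}xe;
# $rest is now 'L<TFTensor|AI::TensorFlow::Libtensorflow::Lib::Types/TFTensor> $t'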
maint/process-notebook.pl view on Meta::CPAN
## Edit to NAME
perl -0777 -pi -e 's/(=head1 NAME\n+)$ENV{SRC_BASENAME}/\1$ENV{PODNAME}/' $DST

## Edit to local section link (Markdown::Pod does not yet recognise this).
perl -pi -E 's,\QL<CPANFILE|#CPANFILE>\E,L<CPANFILE|/CPANFILE>,g' $DST

## Add
## =head1 CPANFILE
##
##   requires '...';
##   requires '...';
scan-perl-prereqs-nqlite --cpanfile $DST | perl -M5';print qq|=head1 CPANFILE\n\n|' -plE '$_ = q| | . $_;' | sponge -a $DST;

## Check output (if on TTY)
if [ -t 0 ]; then
    perldoc $DST;
fi

## Check and run script in the directory of the original (e.g., to get data
## files).
perl -c $DST
t/05_session_run.t view on Meta::CPAN
die "Can not init input op" unless $input_op;

use PDL;

my $p_data = float(
    -0.4809832, -0.3770838, 0.1743573, 0.7720509, -0.4064746, 0.0116595, 0.0051413, 0.9135732, 0.7197526, -0.0400658, 0.1180671, -0.6829428,
    -0.4810135, -0.3772099, 0.1745346, 0.7719303, -0.4066443, 0.0114614, 0.0051195, 0.9135003, 0.7196983, -0.0400035, 0.1178188, -0.6830465,
    -0.4809143, -0.3773398, 0.1746384, 0.7719052, -0.4067171, 0.0111654, 0.0054433, 0.9134697, 0.7192584, -0.0399981, 0.1177435, -0.6835230,
    -0.4808300, -0.3774327, 0.1748246, 0.7718700, -0.4070232, 0.0109549, 0.0059128, 0.9133330, 0.7188759, -0.0398740, 0.1181437, -0.6838635,
    -0.4807833, -0.3775733, 0.1748378, 0.7718275, -0.4073670, 0.0107582, 0.0062978, 0.9131795, 0.7187147, -0.0394935, 0.1184392, -0.6840039,
);
$p_data->reshape(1,5,12);

my $input_tensor = AI::TensorFlow::Libtensorflow::Tensor->New(
    FLOAT, [ $p_data->dims ], $p_data->get_dataref,
    sub { undef $p_data }
);

my $output_op = Output->New({
    oper  => $graph->OperationByName('output_node0'),
    index => 0 } );
t/upstream/CAPI/003_Tensor.t view on Meta::CPAN
#
# It should not be called in this case because aligned_alloc() is used.
ok !$deallocator_called, 'deallocator not called yet';

is $t->Type, 'FLOAT', 'FLOAT TF_Tensor';
is $t->NumDims, 2, '2D TF_Tensor';
is $t->Dim(0), $dims[0], 'dim 0';
is $t->Dim(1), $dims[1], 'dim 1';
is $t->ByteSize, $num_bytes, 'bytes';
is scalar_to_pointer(${ $t->Data }), scalar_to_pointer($values),
    'data at same pointer address';

undef $t;
ok $deallocator_called, 'deallocated';
};

done_testing;
t/upstream/CAPI/018_ImportGraphDef.t view on Meta::CPAN
ok $graph->OperationByName('scalar'), 'got scalar operation from graph';

TF_Utils::Neg($oper, $graph, $s);
TF_Utils::AssertStatusOK($s);
ok $graph->OperationByName('neg'), 'got neg operation from graph';

note 'Export to a GraphDef.';
my $graph_def = AI::TensorFlow::Libtensorflow::Buffer->New;
$graph->ToGraphDef($graph_def, $s);
TF_Utils::AssertStatusOK($s);

note 'Import it, with a prefix, in a fresh graph.';
undef $graph;
$graph = AI::TensorFlow::Libtensorflow::Graph->New;
my $opts = AI::TensorFlow::Libtensorflow::ImportGraphDefOptions->New;
$opts->SetPrefix('imported');
$graph->ImportGraphDef($graph_def, $opts, $s);
TF_Utils::AssertStatusOK($s);

ok my $scalar = $graph->OperationByName('imported/scalar'), 'imported/scalar';
ok my $feed   = $graph->OperationByName('imported/feed'),   'imported/feed';
ok my $neg    = $graph->OperationByName('imported/neg'),    'imported/neg';
t/upstream/CAPI/018_ImportGraphDef.t view on Meta::CPAN
note q|Import it again, with an input mapping, return outputs, and return operation, into the same graph.|;
undef $opts;
$opts = AI::TensorFlow::Libtensorflow::ImportGraphDefOptions->New;
$opts->SetPrefix('imported2');
$opts->AddInputMapping('scalar', 0, $TFOutput->coerce([ $scalar => 0 ]));
$opts->AddReturnOutput('feed', 0);
$opts->AddReturnOutput('scalar', 0);
is $opts->NumReturnOutputs, 2, 'num return outputs';
$opts->AddReturnOperation('scalar');
is $opts->NumReturnOperations, 1, 'num return operations';

my $results = $graph->ImportGraphDefWithResults($graph_def, $opts, $s);
TF_Utils::AssertStatusOK($s);

ok my $scalar2 = $graph->OperationByName("imported2/scalar"), "imported2/scalar";
ok my $feed2   = $graph->OperationByName("imported2/feed"),   "imported2/feed";
ok my $neg2    = $graph->OperationByName("imported2/neg"),    "imported2/neg";

note 'Check input mapping';
$neg_input = $neg->Input( $TFInput->coerce([ $neg => 0 ]) );
is $neg_input, object {
    call sub { shift->oper->Name } => $scalar->Name;
    call index => 0;
}, 'neg input';

note 'Check return outputs';
my $return_outputs = $results->ReturnOutputs;
is $return_outputs, array {
    item 0 => object {
        call sub { shift->oper->Name } => $feed2->Name;
        call index => 0;
    };
    item 1 => object {
        # remapped
        call sub { shift->oper->Name } => $scalar->Name;
        call index => 0;
    };
    end;
}, 'return outputs';

note 'Check return operation';
my $return_opers = $results->ReturnOperations;
is $return_opers, array {
    item 0 => object {
        # not remapped
        call Name => $scalar2->Name;
    };
    end;
}, 'return opers';

undef $results;

note 'Import again, with control dependencies, into the same graph.';
undef $opts;
$opts = AI::TensorFlow::Libtensorflow::ImportGraphDefOptions->New;
$opts->SetPrefix("imported3");
$opts->AddControlDependency($feed);
$opts->AddControlDependency($feed2);
$graph->ImportGraphDef($graph_def, $opts, $s);
TF_Utils::AssertStatusOK($s);