AI-TensorFlow-Libtensorflow


Changes

0.0.7 2023-10-05 01:27:42-0400

  Features

   - Add object detection demo. See <https://github.com/EntropyOrg/perl-AI-TensorFlow-Libtensorflow/pull/23>.

  Refactoring

   - Add timer to the notebooks to time the inference steps. See <https://github.com/EntropyOrg/perl-AI-TensorFlow-Libtensorflow/pull/17>.

  Documentation

   - Add information about installing the GPU version of `libtensorflow`,
     either on "bare metal" or with Docker GPU runtime support. See <https://github.com/EntropyOrg/perl-AI-TensorFlow-Libtensorflow/pull/18>.

  Build changes

   - Add a Dockerfile that builds the GPU version of the omnibus notebook image.
     Update the CI to additionally build the GPU Docker image. See <https://github.com/EntropyOrg/perl-AI-TensorFlow-Libtensorflow/pull/16>.

0.0.6 2023-01-30 15:22:04-0500

  - Documentation

      - Fix NAME for Notebook POD.

0.0.5 2023-01-30 11:46:31-0500

  - Features

      - Docker images with dependencies for notebooks.
      - Support for running notebooks in Binder.

  - Documentation

      - Add manual index and quickstart guide.
      - Add InferenceUsingTFHubEnformerGeneExprPredModel tutorial.

0.0.4 2022-12-21 15:57:53-0500

  - Features

      - Add Data::Printer and stringification support for several classes.
      - Add `::TFLibrary` class. Move `GetAllOpList()` method there.

  - Documentation

      - Add InferenceUsingTFHubMobileNetV2Model tutorial.

0.0.3 2022-12-15 10:46:52-0500

  - Features

      - Add more testing of basic API. Complete port of "(CAPI, *)" tests
        from upstream `tensorflow/c/c_api_test.cc`.

0.0.2 2022-11-28 14:33:33-0500

  - Features

      - Explicit support for minimum Perl v5.14.

0.0.1 2022-11-25 11:43:37-0500

  Features

    - First release.

LICENSE

This software is Copyright (c) 2022 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright 2022 Auto-Parallel Technologies, Inc

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

META.json

{
   "abstract" : "Bindings for Libtensorflow deep learning library",
   "author" : [
      "Zakariyya Mughal <zmughal@cpan.org>"
   ],
   "dynamic_config" : 0,
   "generated_by" : "Dist::Zilla version 6.030, CPAN::Meta::Converter version 2.150010",
   "license" : [
      "apache_2_0"
   ],
   "meta-spec" : {
      "url" : "http://search.cpan.org/perldoc?CPAN::Meta::Spec",
      "version" : 2
   },
   "name" : "AI-TensorFlow-Libtensorflow",
   "no_index" : {
      "directory" : [
         "eg",
         "examples",
         "inc",
         "share",
         "t",
         "xt",
         "maint"
      ]
   },
   "prereqs" : {
      "configure" : {
         "requires" : {
            "ExtUtils::MakeMaker" : "0",
            "perl" : "5.014"
         }
      },
      "develop" : {
         "requires" : {
            "Moose" : "0",
            "Moose::Role" : "0",
            "Pod::Simple::Search" : "0",
            "Test::More" : "0.88",
            "Test::Perl::Critic" : "0",
            "Test::Pod::LinkCheck::Lite" : "0",
            "Test::Pod::Snippets" : "0",
            "Test::Pod::Snippets::Parser" : "0",
            "With::Roles" : "0"
         },
         "suggests" : {
            "CLI::Osprey" : "0",
            "Data::Printer" : "0",
            "File::Find::Rule" : "0",
            "Function::Parameters" : "0",
            "Hook::LexWrap" : "0",
            "List::SomeUtils" : "0",
            "Module::Runtime" : "0",
            "Mu" : "0",
            "Path::Tiny" : "0",
            "Sort::Key::Multi" : "0",
            "Sub::Uplevel" : "0",
            "Syntax::Construct" : "0",
            "Types::Path::Tiny" : "0"
         }
      },
      "runtime" : {
         "requires" : {
            "Alien::Libtensorflow" : "0",
            "Class::Tiny" : "0",
            "Const::Exporter" : "0",
            "Const::Fast" : "0",
            "Devel::StrictMode" : "0",
            "Exporter::Tiny" : "0",
            "FFI::C" : "0.12",
            "FFI::C::ArrayDef" : "0",
            "FFI::C::StructDef" : "0",
            "FFI::CheckLib" : "0.28",
            "FFI::Platypus" : "2.00",
            "FFI::Platypus::API" : "0",
            "FFI::Platypus::Buffer" : "0",
            "FFI::Platypus::Closure" : "0",
            "FFI::Platypus::Memory" : "0",
            "FFI::Platypus::Record" : "0",
            "FFI::Platypus::Type::Enum" : "0",
            "FFI::Platypus::Type::PtrObject" : "0",
            "Feature::Compat::Defer" : "0",
            "List::Util" : "0",
            "Module::Runtime" : "0",
            "Package::Variant" : "0",
            "Sub::Delete" : "0",
            "Sub::Quote" : "0",
            "Type::Library" : "0.008",
            "Type::Utils" : "0",
            "Types::Common" : "0",
            "Types::Standard" : "0",
            "base" : "0",
            "constant" : "0",
            "feature" : "0",
            "namespace::autoclean" : "0",
            "overload" : "0",
            "perl" : "5.014",
            "strict" : "0",
            "warnings" : "0"
         },
         "suggests" : {
            "Data::Printer" : "0",
            "PDL" : "0"
         }
      },
      "test" : {
         "requires" : {
            "Data::Dumper" : "0",
            "PDL" : "0",
            "PDL::Core" : "0",
            "Path::Tiny" : "0",
            "Test2::V0" : "0",
            "Test::More" : "0",
            "aliased" : "0",
            "lib" : "0",
            "perl" : "5.014"
         }
      }
   },
   "release_status" : "stable",
   "resources" : {
      "homepage" : "https://github.com/EntropyOrg/perl-AI-TensorFlow-Libtensorflow",
      "repository" : {
         "type" : "git",
         "url" : "https://github.com/EntropyOrg/perl-AI-TensorFlow-Libtensorflow.git",
         "web" : "https://github.com/EntropyOrg/perl-AI-TensorFlow-Libtensorflow"
      }
   },
   "version" : "0.0.7",
   "x_generated_by_perl" : "v5.26.1",
   "x_serialization_backend" : "Cpanel::JSON::XS version 4.37",
   "x_spdx_expression" : "Apache-2.0"
}

META.yml

---
abstract: 'Bindings for Libtensorflow deep learning library'
author:
  - 'Zakariyya Mughal <zmughal@cpan.org>'
build_requires:
  Data::Dumper: '0'
  PDL: '0'
  PDL::Core: '0'
  Path::Tiny: '0'
  Test2::V0: '0'
  Test::More: '0'
  aliased: '0'
  lib: '0'
  perl: '5.014'
configure_requires:
  ExtUtils::MakeMaker: '0'
  perl: '5.014'
dynamic_config: 0
generated_by: 'Dist::Zilla version 6.030, CPAN::Meta::Converter version 2.150010'
license: apache
meta-spec:
  url: http://module-build.sourceforge.net/META-spec-v1.4.html
  version: '1.4'
name: AI-TensorFlow-Libtensorflow
no_index:
  directory:
    - eg
    - examples
    - inc
    - share
    - t
    - xt
    - maint
requires:
  Alien::Libtensorflow: '0'
  Class::Tiny: '0'
  Const::Exporter: '0'
  Const::Fast: '0'
  Devel::StrictMode: '0'
  Exporter::Tiny: '0'
  FFI::C: '0.12'
  FFI::C::ArrayDef: '0'
  FFI::C::StructDef: '0'
  FFI::CheckLib: '0.28'
  FFI::Platypus: '2.00'
  FFI::Platypus::API: '0'
  FFI::Platypus::Buffer: '0'
  FFI::Platypus::Closure: '0'
  FFI::Platypus::Memory: '0'
  FFI::Platypus::Record: '0'
  FFI::Platypus::Type::Enum: '0'
  FFI::Platypus::Type::PtrObject: '0'
  Feature::Compat::Defer: '0'
  List::Util: '0'
  Module::Runtime: '0'
  Package::Variant: '0'
  Sub::Delete: '0'
  Sub::Quote: '0'
  Type::Library: '0.008'
  Type::Utils: '0'
  Types::Common: '0'
  Types::Standard: '0'
  base: '0'
  constant: '0'
  feature: '0'
  namespace::autoclean: '0'
  overload: '0'
  perl: '5.014'
  strict: '0'
  warnings: '0'
resources:
  homepage: https://github.com/EntropyOrg/perl-AI-TensorFlow-Libtensorflow
  repository: https://github.com/EntropyOrg/perl-AI-TensorFlow-Libtensorflow.git
version: 0.0.7
x_generated_by_perl: v5.26.1
x_serialization_backend: 'YAML::Tiny version 1.74'
x_spdx_expression: Apache-2.0

Makefile.PL

# This file was automatically generated by Dist::Zilla::Plugin::MakeMaker v6.030.
use strict;
use warnings;

use 5.014;

use ExtUtils::MakeMaker;

my %WriteMakefileArgs = (
  "ABSTRACT" => "Bindings for Libtensorflow deep learning library",
  "AUTHOR" => "Zakariyya Mughal <zmughal\@cpan.org>",
  "CONFIGURE_REQUIRES" => {
    "ExtUtils::MakeMaker" => 0
  },
  "DISTNAME" => "AI-TensorFlow-Libtensorflow",
  "LICENSE" => "apache",
  "MIN_PERL_VERSION" => "5.014",
  "NAME" => "AI::TensorFlow::Libtensorflow",
  "PREREQ_PM" => {
    "Alien::Libtensorflow" => 0,
    "Class::Tiny" => 0,
    "Const::Exporter" => 0,
    "Const::Fast" => 0,
    "Devel::StrictMode" => 0,
    "Exporter::Tiny" => 0,
    "FFI::C" => "0.12",
    "FFI::C::ArrayDef" => 0,
    "FFI::C::StructDef" => 0,
    "FFI::CheckLib" => "0.28",
    "FFI::Platypus" => "2.00",
    "FFI::Platypus::API" => 0,
    "FFI::Platypus::Buffer" => 0,
    "FFI::Platypus::Closure" => 0,
    "FFI::Platypus::Memory" => 0,
    "FFI::Platypus::Record" => 0,
    "FFI::Platypus::Type::Enum" => 0,
    "FFI::Platypus::Type::PtrObject" => 0,
    "Feature::Compat::Defer" => 0,
    "List::Util" => 0,
    "Module::Runtime" => 0,
    "Package::Variant" => 0,
    "Sub::Delete" => 0,
    "Sub::Quote" => 0,
    "Type::Library" => "0.008",
    "Type::Utils" => 0,
    "Types::Common" => 0,
    "Types::Standard" => 0,
    "base" => 0,
    "constant" => 0,
    "feature" => 0,
    "namespace::autoclean" => 0,
    "overload" => 0,
    "strict" => 0,
    "warnings" => 0
  },
  "TEST_REQUIRES" => {
    "Data::Dumper" => 0,
    "PDL" => 0,
    "PDL::Core" => 0,
    "Path::Tiny" => 0,
    "Test2::V0" => 0,
    "Test::More" => 0,
    "aliased" => 0,
    "lib" => 0
  },
  "VERSION" => "0.0.7",
  "test" => {
    "TESTS" => "t/*.t t/AI/TensorFlow/*.t t/upstream/CAPI/*.t"
  }
);


my %FallbackPrereqs = (
  "Alien::Libtensorflow" => 0,
  "Class::Tiny" => 0,
  "Const::Exporter" => 0,
  "Const::Fast" => 0,
  "Data::Dumper" => 0,
  "Devel::StrictMode" => 0,
  "Exporter::Tiny" => 0,
  "FFI::C" => "0.12",
  "FFI::C::ArrayDef" => 0,
  "FFI::C::StructDef" => 0,
  "FFI::CheckLib" => "0.28",
  "FFI::Platypus" => "2.00",
  "FFI::Platypus::API" => 0,
  "FFI::Platypus::Buffer" => 0,
  "FFI::Platypus::Closure" => 0,
  "FFI::Platypus::Memory" => 0,
  "FFI::Platypus::Record" => 0,
  "FFI::Platypus::Type::Enum" => 0,
  "FFI::Platypus::Type::PtrObject" => 0,
  "Feature::Compat::Defer" => 0,
  "List::Util" => 0,
  "Module::Runtime" => 0,
  "PDL" => 0,
  "PDL::Core" => 0,
  "Package::Variant" => 0,
  "Path::Tiny" => 0,
  "Sub::Delete" => 0,
  "Sub::Quote" => 0,
  "Test2::V0" => 0,
  "Test::More" => 0,
  "Type::Library" => "0.008",
  "Type::Utils" => 0,
  "Types::Common" => 0,
  "Types::Standard" => 0,
  "aliased" => 0,
  "base" => 0,
  "constant" => 0,
  "feature" => 0,
  "lib" => 0,
  "namespace::autoclean" => 0,
  "overload" => 0,
  "strict" => 0,
  "warnings" => 0
);


unless ( eval { ExtUtils::MakeMaker->VERSION(6.63_03) } ) {
  delete $WriteMakefileArgs{TEST_REQUIRES};
  delete $WriteMakefileArgs{BUILD_REQUIRES};
  $WriteMakefileArgs{PREREQ_PM} = \%FallbackPrereqs;
}

delete $WriteMakefileArgs{CONFIGURE_REQUIRES}
  unless eval { ExtUtils::MakeMaker->VERSION(6.52) };

WriteMakefile(%WriteMakefileArgs);

README

This archive contains the distribution AI-TensorFlow-Libtensorflow,
version 0.0.7:

  Bindings for Libtensorflow deep learning library

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004


This README file was generated by Dist::Zilla::Plugin::Readme v6.030.

lib/AI/TensorFlow/Libtensorflow.pm

use AI::TensorFlow::Libtensorflow::Eager::Context;

use FFI::C;

my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
FFI::C->ffi($ffi);

$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);

sub new {
	my ($class) = @_;
	bless {}, $class;
}

$ffi->attach( 'Version' => [], 'string' );

1;

__END__

=pod

lib/AI/TensorFlow/Libtensorflow.pm


=head1 NAME

AI::TensorFlow::Libtensorflow - Bindings for Libtensorflow deep learning library

=for html <a href="https://mybinder.org/v2/gh/EntropyOrg/perl-AI-TensorFlow-Libtensorflow/master"><img src="https://mybinder.org/badge_logo.svg" alt="Binder" /></a>
<a href="https://quay.io/repository/entropyorg/perl-ai-tensorflow-libtensorflow"><img src="https://img.shields.io/badge/quay.io-images-red.svg" alt="quay.io images" /></a>

=head1 SYNOPSIS

  use aliased 'AI::TensorFlow::Libtensorflow' => 'Libtensorflow';

=head1 DESCRIPTION

The C<libtensorflow> library provides low-level C bindings
for TensorFlow with a stable ABI.

For more detailed information about this library including how to get started,
see L<AI::TensorFlow::Libtensorflow::Manual>.

=head1 CLASS METHODS

=head2 Version

  my $version = Libtensorflow->Version();
  like $version, qr/(\d|\.)+/, 'Got version';

B<Returns>

=over 4

=item Str

Version number for the C<libtensorflow> library.

=back

lib/AI/TensorFlow/Libtensorflow.pm

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/ApiDefMap.pm

$AI::TensorFlow::Libtensorflow::ApiDefMap::VERSION = '0.0.7';
use strict;
use warnings;
use namespace::autoclean;
use AI::TensorFlow::Libtensorflow::Lib qw(arg);

my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);

$ffi->attach( [ 'NewApiDefMap' => 'New' ] => [
	arg 'TF_Buffer' => 'op_list_buffer',
	arg 'TF_Status' => 'status',
] => 'TF_ApiDefMap' => sub {
	# Drop the Perl class name so that only the C arguments
	# reach the attached XS function.
	my ($xs, $class, @rest) = @_;
	$xs->(@rest);
});

$ffi->attach( ['DeleteApiDefMap' => 'DESTROY'] => [
	arg 'TF_ApiDefMap' => 'apimap'
] => 'void');

$ffi->attach( [ 'ApiDefMapPut' => 'Put' ] => [
	arg 'TF_ApiDefMap' => 'api_def_map',
	arg 'tf_text_buffer' => [qw(text text_len)],
	arg 'TF_Status' => 'status',
] => 'void' );

$ffi->attach( ['ApiDefMapGet' => 'Get' ] => [
	arg 'TF_ApiDefMap' => 'api_def_map',
	arg 'tf_text_buffer'  => [qw(name name_len)],
	arg 'TF_Status' => 'status',
] => 'TF_Buffer');

1;

__END__

=pod

=encoding UTF-8

=head1 NAME

AI::TensorFlow::Libtensorflow::ApiDefMap - Maps Operation to API description

=head1 SYNOPSIS

  use aliased 'AI::TensorFlow::Libtensorflow::ApiDefMap' => 'ApiDefMap';

=head1 CONSTRUCTORS

=head2 New

  use AI::TensorFlow::Libtensorflow;
  use AI::TensorFlow::Libtensorflow::Status;

  my $map = ApiDefMap->New(
    AI::TensorFlow::Libtensorflow::TFLibrary->GetAllOpList,
    my $status = AI::TensorFlow::Libtensorflow::Status->New
  );
  ok $map, 'Created ApiDefMap';

B<C API>: L<< C<TF_NewApiDefMap>|AI::TensorFlow::Libtensorflow::Manual::CAPI/TF_NewApiDefMap >>

=head1 METHODS

=head2 Put

B<C API>: L<< C<TF_ApiDefMapPut>|AI::TensorFlow::Libtensorflow::Manual::CAPI/TF_ApiDefMapPut >>
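
No example for C<Put> is shown above; as a hedged sketch (not taken from the
distribution's tests, and assuming the positional argument order mirrors
C<Get>: the ApiDef text proto followed by a status object):

  my $text = 'op { graph_op_name: "NoOp" }';
  $map->Put(
    $text,
    my $status = AI::TensorFlow::Libtensorflow::Status->New
  );
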

=head2 Get

=over 2

C<<<
Get($name, $status)
>>>

=back

  my $api_def_buf = $map->Get(
    'NoOp',
    my $status = AI::TensorFlow::Libtensorflow::Status->New
  );

  cmp_ok $api_def_buf->length, '>', 0, 'Got ApiDef buffer for NoOp operation';

B<Parameters>

=over 4

=item Str $name

Name of the operation to retrieve.

=item L<TFStatus|AI::TensorFlow::Libtensorflow::Lib::Types/TFStatus> $status

lib/AI/TensorFlow/Libtensorflow/ApiDefMap.pm

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Buffer.pm

use strict;
use warnings;
use namespace::autoclean;
use AI::TensorFlow::Libtensorflow::Lib qw(arg);

my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);
use FFI::C;
FFI::C->ffi($ffi);
$ffi->load_custom_type('AI::TensorFlow::Libtensorflow::Lib::FFIType::TFPtrSizeScalarRef'
	=> 'tf_buffer_buffer'
);

use FFI::Platypus::Buffer;
use FFI::Platypus::Memory;





FFI::C->struct( 'TF_Buffer' => [
	data => 'opaque',
	length => 'size_t',
	_data_deallocator => 'opaque', # data_deallocator_t
	# NOTE: declaring the field directly as 'data_deallocator_t' does not
	# work, so it is stored as an opaque pointer and cast in
	# data_deallocator() below.
	#_data_deallocator => 'data_deallocator_t',
]);
use Sub::Delete;
# Remove the DESTROY that FFI::C->struct installs so that the
# TF_DeleteBuffer-based DESTROY attached below is used instead.
delete_sub 'DESTROY';

sub data_deallocator {
	my ($self, $coderef) = @_;

	# With no coderef, act as a getter for the current closure.
	return $self->{_data_deallocator_closure} unless $coderef;

	my $closure = $ffi->closure( $coderef );

	# Keep the closure alive for the lifetime of the buffer.
	$closure->sticky;
	$self->{_data_deallocator_closure} = $closure;

	my $opaque = $ffi->cast('data_deallocator_t', 'opaque', $closure);
	$self->_data_deallocator( $opaque );
}
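
# Usage (hedged sketch, not from the distribution's tests): when the
# buffer's memory is owned by Perl, a no-op deallocator tells
# libtensorflow not to free it. The ->data and ->length calls below
# assume the generated FFI::C struct accessors are writable.
#
#   use FFI::Platypus::Buffer qw(scalar_to_buffer);
#   my $data = 'serialized protobuf bytes';
#   my ($pointer, $size) = scalar_to_buffer($data);
#   my $buffer = AI::TensorFlow::Libtensorflow::Buffer->New;
#   $buffer->data($pointer);
#   $buffer->length($size);
#   $buffer->data_deallocator(sub { my ($pointer, $size) = @_; });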


$ffi->attach( [ 'NewBuffer' => 'New' ] => [] => 'TF_Buffer' );

$ffi->attach( [ 'NewBufferFromString' => 'NewFromString' ] => [
	arg 'tf_buffer_buffer' => [qw(proto proto_len)]
] => 'TF_Buffer' => sub {
	# Drop the Perl class name so that only the C arguments
	# reach the attached XS function.
	my ($xs, $class, @rest) = @_;
	$xs->(@rest);
});


$ffi->attach( [ 'DeleteBuffer' => 'DESTROY' ] => [ 'TF_Buffer' ], 'void' );

1;

__END__

=pod

=encoding UTF-8

=head1 NAME

AI::TensorFlow::Libtensorflow::Buffer - Buffer that holds pointer to data with length

=head1 SYNOPSIS

  use aliased 'AI::TensorFlow::Libtensorflow::Buffer' => 'Buffer';

=head1 DESCRIPTION

C<TFBuffer> is a data structure that stores a pointer to a block of data, the
length of the data, and optionally a deallocator function for memory
management.

This structure is typically used in C<libtensorflow> to store the data for a
serialized protocol buffer.

lib/AI/TensorFlow/Libtensorflow/Buffer.pm

=head2 New

=over 2

C<<<
New()
>>>

=back

  my $buffer = Buffer->New();

  ok $buffer, 'created an empty buffer';
  is $buffer->length, 0, 'with a length of 0';

Create an empty buffer. Useful for passing as an output parameter.

B<Returns>

=over 4

=item L<TFBuffer|AI::TensorFlow::Libtensorflow::Lib::Types/TFBuffer>

Empty buffer.

lib/AI/TensorFlow/Libtensorflow/Buffer.pm


C<<<
NewFromString( $proto )
>>>

=back

Makes a copy of the input and sets an appropriate deallocator. Useful for
passing in read-only, input protobufs.

  my $data = 'bytes';
  my $buffer = Buffer->NewFromString(\$data);
  ok $buffer, 'create buffer from string';
  is $buffer->length, bytes::length($data), 'same length as string';

B<Parameters>

=over 4

=item ScalarRef[Bytes] $proto

=back

B<Returns>

lib/AI/TensorFlow/Libtensorflow/Buffer.pm

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/DataType.pm

use Const::Exporter;

use Devel::StrictMode;
use Types::Common qw(Int Str);

use namespace::autoclean;

# enum TF_DataType
# From <tensorflow/c/tf_datatype.h>
my %_ENUM_DTYPE = (
	FLOAT      =>  1,
	DOUBLE     =>  2,
	INT32      =>  3, #// Int32 tensors are always in 'host' memory.
	UINT8      =>  4,
	INT16      =>  5,
	INT8       =>  6,
	STRING     =>  7,
	COMPLEX64  =>  8, #// Single-precision complex
	# NOTE Stubbing out this duplicate so that no new code uses this.
	#COMPLEX    =>  8, #// Old identifier kept for API backwards compatibility
	INT64      =>  9,
	BOOL       => 10,
	QINT8      => 11, #// Quantized int8
	QUINT8     => 12, #// Quantized uint8
	QINT32     => 13, #// Quantized int32
	BFLOAT16   => 14, #// Float32 truncated to 16 bits.  Only for cast ops.
	QINT16     => 15, #// Quantized int16
	QUINT16    => 16, #// Quantized uint16
	UINT16     => 17,
	COMPLEX128 => 18, #// Double-precision complex
	HALF       => 19,
	RESOURCE   => 20,
	VARIANT    => 21,
	UINT32     => 22,
	UINT64     => 23,
);
my %_REV_ENUM_DTYPE = reverse %_ENUM_DTYPE;
if( STRICT ) { # ASSERT
	die "Duplicate values for \%_ENUM_DTYPE" unless keys %_ENUM_DTYPE == keys %_REV_ENUM_DTYPE
}

my %_DTYPES;
Const::Exporter->import(
	dtypes => [
		do {
			%_DTYPES = map {
				$_ => bless \do {
					my $value = $_ENUM_DTYPE{$_};
				}, __PACKAGE__;
			} keys %_ENUM_DTYPE;
		},
		'@DTYPES' => [ sort { $$a <=> $$b } values %_DTYPES ],
	]
);
use namespace::autoclean;

my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_for_object('DataType'));

$ffi->type('object(AI::TensorFlow::Libtensorflow::DataType,int)', 'TF_DataType');

$ffi->attach( 'Size' => ['TF_DataType'] => 'size_t' );


use overload
	'==' => '_op_num_equals',
	'eq'  => '_op_eq',
	'""'  => '_op_stringify';

sub _op_num_equals {
	my ($a, $b, $swap) = @_;
	my $int_a = ref $a ? 0+$$a : 0+$a;
	my $int_b = ref $b ? 0+$$b : 0+$b;
	if( STRICT ) { # ASSERT
		Int->assert_valid($int_a);
		Int->assert_valid($int_b);
	}
	!$swap
		? $int_a == $int_b
		: $int_b == $int_a
}

sub _op_eq {
	my ($a, $b, $swap) = @_;
	my $str_a = "$a";
	my $str_b = "$b";
	if( STRICT ) { # ASSERT
		Str->assert_valid($str_a);
		Str->assert_valid($str_b);
	}
	!$swap
		?  $str_a eq $str_b
		:  $str_b eq $str_a;
}

sub _op_stringify { $_REV_ENUM_DTYPE{ 0 + ${$_[0]}} }
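The `$swap` flag in the overloads above matters for asymmetric operators; for a symmetric one like `==`, Perl passes the operands already swapped, so no branch is strictly needed. A minimal standalone sketch (class name hypothetical):

```perl
use strict;
use warnings;

# When Perl reverses operand order (e.g. 1 == $obj), the object is still
# first in @_ and $swap is true; '==' is symmetric, so both orders agree.
package NumBox {
    use overload
        '==' => sub {
            my ($x, $y, $swap) = @_;
            my $ix = ref $x ? $$x : $x;
            my $iy = ref $y ? $$y : $y;
            return $ix == $iy;
        },
        '""' => sub { 'NumBox(' . ${ $_[0] } . ')' };
}

my $one = bless \(my $v = 1), 'NumBox';
print( ($one == 1 && 1 == $one) ? "ok\n" : "not ok\n" );   # prints "ok"
```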

1;

__END__

=pod

=encoding UTF-8

=head1 NAME

AI::TensorFlow::Libtensorflow::DataType - Datatype enum

=head1 SYNOPSIS

  use AI::TensorFlow::Libtensorflow::DataType qw(FLOAT @DTYPES);
  use List::Util qw(max);

  my $dtype = FLOAT;
  is FLOAT->Size, 4, 'FLOAT is 4 bytes large';
  is max(map { $_->Size } @DTYPES), 16,
    'Largest type has sizeof() == 16 bytes';

=head1 DESCRIPTION

Enum representing native data types used inside of containers such as
L<TFTensor|AI::TensorFlow::Libtensorflow::Lib::Types/TFTensor>.

=head1 CONSTANTS

=head2 STRING

lib/AI/TensorFlow/Libtensorflow/DataType.pm

Handle to a mutable resource.

=head2 VARIANT

Variant.

=head1 METHODS

=head2 Size

  my $size = $dtype->Size();

B<Returns>

=over 4

=item size_t

The number of bytes used for the DataType C<$dtype>. Returns C<0> for
variable-length types such as C<STRING> or for invalid types.

=back

B<C API>: L<< C<TF_DataTypeSize>|AI::TensorFlow::Libtensorflow::Manual::CAPI/TF_DataTypeSize >>

=head1 OPERATORS

=head2 C<< == >>

Numeric equality of the underlying enum integer value.

  use AI::TensorFlow::Libtensorflow::DataType qw(FLOAT);
  cmp_ok FLOAT, '==', FLOAT, 'Compare FLOAT objects numerically';
  cmp_ok FLOAT, '==', 1    , 'FLOAT enumeration is internally 1';

=head2 C<< eq >>

Compare string equality against type name.

  use AI::TensorFlow::Libtensorflow::DataType qw(FLOAT);
  cmp_ok FLOAT, 'eq', 'FLOAT', 'Compare FLOAT object to string';

=head2 C<< "" >>

Stringification to the name of the enumerated type (e.g., FLOAT, DOUBLE).

  use AI::TensorFlow::Libtensorflow::DataType qw(DOUBLE);
  is "@{[ DOUBLE ]}", 'DOUBLE', 'Stringifies';

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/DeviceList.pm

$AI::TensorFlow::Libtensorflow::DeviceList::VERSION = '0.0.7';
use strict;
use warnings;
use namespace::autoclean;
use AI::TensorFlow::Libtensorflow::Lib qw(arg);

my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);

$ffi->attach( [ 'DeleteDeviceList' => 'DESTROY' ] => [
	arg TF_DeviceList => 'list',
] => 'void' );

$ffi->attach( [ 'DeviceListCount' => 'Count' ] => [
	arg TF_DeviceList => 'list',
] => 'int' );

my %methods = (
	Name        => 'string',
	Type        => 'string',
	MemoryBytes => 'int64_t',
	Incarnation => 'uint64_t',
);
for my $method (keys %methods) {
	$ffi->attach( [ "DeviceList${method}" => $method ] => [
		arg TF_DeviceList => 'list',
		arg int => 'index',
		arg TF_Status => 'status'
	] => $methods{$method} );
}
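The loop above is plain metaprogramming: one `attach` per accessor. A simplified standalone analogue via the symbol table (class and fields hypothetical; the real methods call into libtensorflow with a status argument):

```perl
use strict;
use warnings;

# Generate one read-only accessor per hash entry by installing a closure
# into the package's symbol table.
my %accessors = ( Name => 'name', Type => 'type' );
for my $method (keys %accessors) {
    no strict 'refs';
    *{"My::Device::$method"} = sub { $_[0]->{ $accessors{$method} } };
}

my $dev = bless { name => 'CPU:0', type => 'CPU' }, 'My::Device';
print $dev->Name, " is a ", $dev->Type, " device\n";   # prints "CPU:0 is a CPU device"
```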

### From tensorflow/core/framework/types.cc
my %DEVICE_TYPES = (
	DEFAULT => "DEFAULT",
	CPU => "CPU",
	GPU => "GPU",
	TPU => "TPU",
	TPU_SYSTEM => "TPU_SYSTEM",
);

1;

__END__

=pod

=encoding UTF-8

lib/AI/TensorFlow/Libtensorflow/DeviceList.pm

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Eager/Context.pm

package AI::TensorFlow::Libtensorflow::Eager::Context;
# ABSTRACT: Eager context
$AI::TensorFlow::Libtensorflow::Eager::Context::VERSION = '0.0.7';
use strict;
use warnings;
use AI::TensorFlow::Libtensorflow::Lib qw(arg);
my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);

$ffi->attach( [ 'NewContext' => 'New' ] => [
	arg TFE_ContextOptions => 'opts',
	arg TF_Status => 'status'
] => 'TFE_Context' => sub {
	my ($xs, $class, @rest) = @_;
	$xs->(@rest);
} );

1;

__END__

=pod

=encoding UTF-8

=head1 NAME

lib/AI/TensorFlow/Libtensorflow/Eager/Context.pm

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Eager/ContextOptions.pm

use strict;
use warnings;
use AI::TensorFlow::Libtensorflow::Lib qw(arg);
my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);

$ffi->attach( [ 'NewContextOptions' => 'New' ] => [
] => 'TFE_ContextOptions' );

$ffi->attach( [ 'DeleteContextOptions' => 'DESTROY' ] => [
	arg TFE_ContextOptions => 'options'
] => 'void' );


1;

__END__

=pod

=encoding UTF-8

lib/AI/TensorFlow/Libtensorflow/Eager/ContextOptions.pm

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Graph.pm

use AI::TensorFlow::Libtensorflow::Buffer;
use AI::TensorFlow::Libtensorflow::Output;
my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);

$ffi->attach( [ 'NewGraph' => 'New' ] => [] => 'TF_Graph' );

$ffi->attach( [ 'DeleteGraph' => 'DESTROY' ] => [ arg 'TF_Graph' => 'self' ], 'void' );

$ffi->attach( [ 'GraphImportGraphDef'  => 'ImportGraphDef'  ] => [
	arg 'TF_Graph' => 'graph',
	arg 'TF_Buffer' => 'graph_def',
	arg 'TF_ImportGraphDefOptions' => 'options',
	arg 'TF_Status' => 'status',
] => 'void' );

$ffi->attach( [ 'GraphImportGraphDefWithResults' => 'ImportGraphDefWithResults' ] => [
	arg TF_Graph => 'graph',
	arg TF_Buffer => 'graph_def',
	arg TF_ImportGraphDefOptions => 'options',
	arg TF_Status => 'status',
] => 'TF_ImportGraphDefResults');

$ffi->attach( [ 'GraphImportGraphDefWithReturnOutputs' => 'ImportGraphDefWithReturnOutputs' ] => [
	arg TF_Graph => 'graph',
	arg TF_Buffer => 'graph_def',
	arg TF_ImportGraphDefOptions => 'options',
	arg TF_Output_struct_array => 'return_outputs',
	arg int => 'num_return_outputs',
	arg TF_Status => 'status',
] => 'void' => sub {
	my ($xs, $graph, $graph_def, $options, $status) = @_;
	my $num_return_outputs = $options->NumReturnOutputs;
	return [] if $num_return_outputs == 0;

	my $return_outputs = AI::TensorFlow::Libtensorflow::Output->_adef->create( $num_return_outputs );
	$xs->($graph, $graph_def, $options,
		$return_outputs, $num_return_outputs,
		$status);
	return AI::TensorFlow::Libtensorflow::Output->_from_array( $return_outputs );
});

$ffi->attach( [ 'GraphOperationByName' => 'OperationByName' ] => [
	arg 'TF_Graph' => 'graph',
	arg 'string'   => 'oper_name',
] => 'TF_Operation' );

$ffi->attach( [ 'GraphSetTensorShape' => 'SetTensorShape' ] => [
	arg 'TF_Graph' => 'graph',
	arg 'TF_Output' => 'output',
	arg 'tf_dims_buffer' => [qw(dims num_dims)],
	arg 'TF_Status' => 'status',
] => 'void');

$ffi->attach( ['GraphGetTensorShape' => 'GetTensorShape'] => [
	arg 'TF_Graph' => 'graph',
	arg 'TF_Output' => 'output',
	arg 'tf_dims_buffer' => [qw(dims num_dims)],
	arg 'TF_Status' => 'status',
] => 'void' => sub {
	my ($xs, @rest) = @_;
	my ($graph, $output, $status) = @rest;
	my $dims = [ (0)x($graph->GetTensorNumDims($output, $status)) ];
	$xs->($graph, $output, $dims, $status);
	return $dims;
});

$ffi->attach( [ 'GraphGetTensorNumDims' => 'GetTensorNumDims' ] => [
	arg 'TF_Graph' => 'graph',
	arg 'TF_Output' => 'output',
	arg 'TF_Status' => 'status',
] => 'int');

$ffi->attach( [ 'GraphNextOperation' => 'NextOperation' ] => [
	arg 'TF_Graph' => 'graph',
	arg 'size_t*'  => 'pos',
] => 'TF_Operation');

$ffi->attach( [ 'UpdateEdge' => 'UpdateEdge' ] => [
	arg 'TF_Graph' => 'graph',
	arg 'TF_Output' => 'new_src',
	arg 'TF_Input'  => 'dst',
	arg 'TF_Status' => 'status',
] => 'void');

$ffi->attach([ 'GraphToGraphDef' => 'ToGraphDef' ] => [
	arg 'TF_Graph' => 'graph',
	arg 'TF_Buffer' => 'output_graph_def',
	arg 'TF_Status' => 'status',
] => 'void');

$ffi->attach( [ 'GraphGetOpDef' => 'GetOpDef' ] => [
	arg TF_Graph => 'graph',
	arg string => 'op_name',
	arg TF_Buffer => 'output_op_def',
	arg TF_Status => 'status',
] => 'void');

1;

__END__

=pod

=encoding UTF-8

=head1 NAME

AI::TensorFlow::Libtensorflow::Graph - A TensorFlow computation, represented as a dataflow graph

=head1 SYNOPSIS

  use aliased 'AI::TensorFlow::Libtensorflow::Graph' => 'Graph';

=head1 DESCRIPTION

=head1 CONSTRUCTORS

=head2 New

=over 2

C<<<
New()
>>>

=back

  my $graph = Graph->New;
  ok $graph, 'created graph';

B<Returns>

=over 4

=item L<TFGraph|AI::TensorFlow::Libtensorflow::Lib::Types/TFGraph>

An empty graph.

=back

lib/AI/TensorFlow/Libtensorflow/Graph.pm

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/ImportGraphDefOptions.pm

use warnings;
use namespace::autoclean;
use AI::TensorFlow::Libtensorflow::Lib qw(arg);

my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);

$ffi->attach( [ 'NewImportGraphDefOptions' => 'New' ] => [] => 'TF_ImportGraphDefOptions' );

$ffi->attach( [ 'DeleteImportGraphDefOptions' => 'DESTROY' ] => [
	arg 'TF_ImportGraphDefOptions' => 'self',
] => 'void' );

$ffi->attach( [ 'ImportGraphDefOptionsSetPrefix' => 'SetPrefix' ] => [
	arg 'TF_ImportGraphDefOptions' => 'opts',
	arg 'string' => 'prefix',
] => 'void' );

$ffi->attach( [ 'ImportGraphDefOptionsAddInputMapping' => 'AddInputMapping' ] => [
	arg 'TF_ImportGraphDefOptions' => 'opts',
	arg 'string' => 'src_name',
	arg 'int' => 'src_index',
	arg 'TF_Output' => 'dst',
] => 'void');

$ffi->attach( [ 'ImportGraphDefOptionsAddReturnOutput' => 'AddReturnOutput' ] => [
	arg TF_ImportGraphDefOptions => 'opts',
	arg string => 'oper_name',
	arg int => 'index',
] => 'void' );

$ffi->attach( [ 'ImportGraphDefOptionsNumReturnOutputs' => 'NumReturnOutputs' ] => [
	arg TF_ImportGraphDefOptions => 'opts',
] => 'int');

$ffi->attach( [ 'ImportGraphDefOptionsAddReturnOperation' => 'AddReturnOperation' ] => [
	arg TF_ImportGraphDefOptions => 'opts',
	arg string => 'oper_name',
] => 'void' );

$ffi->attach( [ 'ImportGraphDefOptionsNumReturnOperations' => 'NumReturnOperations' ] => [
	arg TF_ImportGraphDefOptions => 'opts',
] => 'int' );

$ffi->attach( [ 'ImportGraphDefOptionsAddControlDependency' => 'AddControlDependency' ] => [
	arg TF_ImportGraphDefOptions => 'opts',
	arg TF_Operation => 'oper',
] => 'void' );

$ffi->attach( [ 'ImportGraphDefOptionsRemapControlDependency' => 'RemapControlDependency' ] => [
	arg TF_ImportGraphDefOptions => 'opts',
	arg string => 'src_name',
	arg TF_Operation => 'dst',
] => 'void' );

1;

__END__

=pod

=encoding UTF-8

lib/AI/TensorFlow/Libtensorflow/ImportGraphDefOptions.pm

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/ImportGraphDefResults.pm

use warnings;
use namespace::autoclean;
use AI::TensorFlow::Libtensorflow::Lib qw(arg);
use FFI::Platypus::Buffer qw(buffer_to_scalar window);
use List::Util ();

my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);

$ffi->attach( [ 'DeleteImportGraphDefResults' => 'DESTROY' ] => [
	arg TF_ImportGraphDefResults => 'results',
] => 'void' );

$ffi->attach( [ 'ImportGraphDefResultsReturnOutputs' => 'ReturnOutputs' ] => [
	arg TF_ImportGraphDefResults => 'results',
	arg 'int*' => 'num_outputs',
	arg 'opaque*' => { id => 'outputs', type => 'TF_Output_struct_array*' },
] => 'void' => sub {
	my ($xs, $results) = @_;
	my $num_outputs;
	my $outputs_array = undef;
	$xs->($results, \$num_outputs, \$outputs_array);
	return [] if $num_outputs == 0;

	my $sizeof_output = $ffi->sizeof('TF_Output');
	window(my $outputs_packed, $outputs_array, $sizeof_output * $num_outputs );
	# due to unpack, these are copies (no longer owned by $results)
	my @outputs = map bless(\$_, "AI::TensorFlow::Libtensorflow::Output"),
		unpack "(a${sizeof_output})*", $outputs_packed;
	return \@outputs;
});

$ffi->attach( [ 'ImportGraphDefResultsReturnOperations' => 'ReturnOperations' ] => [
	arg TF_ImportGraphDefResults => 'results',
	arg 'int*' => 'num_opers',
	arg 'opaque*' => { id => 'opers', type => 'TF_Operation_array*' },
] => 'void' => sub {
	my ($xs, $results) = @_;
	my $num_opers;
	my $opers_array = undef;
	$xs->($results, \$num_opers, \$opers_array);
	return [] if $num_opers == 0;

	my $opers_array_base_packed = buffer_to_scalar($opers_array,
		$ffi->sizeof('opaque') * $num_opers );
	my @opers = map {
		$ffi->cast('opaque', 'TF_Operation', $_ )
	} unpack "(@{[ AI::TensorFlow::Libtensorflow::Lib::_pointer_incantation ]})*", $opers_array_base_packed;
	return \@opers;
} );

$ffi->attach( [ 'ImportGraphDefResultsMissingUnusedInputMappings' => 'MissingUnusedInputMappings' ] => [
	arg TF_ImportGraphDefResults => 'results',
	arg 'int*' => 'num_missing_unused_input_mappings',
	arg 'opaque*' => { id => 'src_names', ctype => 'const char***' },
	arg 'opaque*' => { id => 'src_indexes', ctype => 'int**' },
] => 'void' => sub {
	my ($xs, $results) = @_;
	my $num_missing_unused_input_mappings;
	my $src_names;
	my $src_indexes;
	$xs->($results,
		\$num_missing_unused_input_mappings,
		\$src_names, \$src_indexes
	);
	my $src_names_str   = $ffi->cast('opaque',
		"string[$num_missing_unused_input_mappings]", $src_names);
	my $src_indexes_int = $ffi->cast('opaque',
		"int[$num_missing_unused_input_mappings]", $src_indexes);
	return [ List::Util::zip($src_names_str, $src_indexes_int) ];
});
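The wrapper above pairs two parallel C arrays into name/index pairs. A pure-Perl equivalent of that pairing (the data below is hypothetical):

```perl
use strict;
use warnings;

# Zip two parallel arrays into [name, index] pairs, as the wrapper does
# with the src_names and src_indexes returned by the C API.
my @src_names   = ('input_a', 'input_b');
my @src_indexes = (0, 1);
my @pairs = map { [ $src_names[$_], $src_indexes[$_] ] } 0 .. $#src_names;
print "$pairs[1][0] => $pairs[1][1]\n";   # prints "input_b => 1"
```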

1;

__END__

=pod

=encoding UTF-8

lib/AI/TensorFlow/Libtensorflow/ImportGraphDefResults.pm

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Input.pm

use warnings;
use namespace::autoclean;
use FFI::Platypus::Record;
use AI::TensorFlow::Libtensorflow::Lib::FFIType::Variant::RecordArrayRef;

use AI::TensorFlow::Libtensorflow::Lib;
my $ffi = AI::TensorFlow::Libtensorflow::Lib->ffi;
$ffi->mangler(AI::TensorFlow::Libtensorflow::Lib->mangler_default);

record_layout_1($ffi,
	'opaque' => '_oper',   # 8 (on 64-bit)
	'int'    => '_index',  # 4

	$ffi->sizeof('opaque') == 8 ? (
		'char[4]' => ':',
	) : (),
);
$ffi->type('record(AI::TensorFlow::Libtensorflow::Input)', 'TF_Input');

sub New {
	my ($class, $args) = @_;

	my $record = $class->new({
		_oper  => $ffi->cast( 'TF_Operation', 'opaque', delete $args->{oper} ),
		_index => delete $args->{index},
	});

	return $record;
}

sub oper  { $ffi->cast('opaque', 'TF_Operation', $_[0]->_oper ) }
sub index { $_[0]->_index }

use FFI::C::ArrayDef;
use FFI::C::StructDef;
my $sdef = FFI::C::StructDef->new($ffi,
	name     => 'TF_Input_struct',
	members  => [
		_oper  => 'opaque',
		_index => 'int',
		__ignore => 'char[4]',
	],
);
my $adef = FFI::C::ArrayDef->new($ffi,
	name => 'TF_Input_struct_array',
	members => [ 'TF_Input_struct' ]
);
sub _adef { $adef; }
sub _as_array {
	my $class = shift;
	my $output = $class->_adef->create(0 + @_);
	for my $idx (0..@_-1) {
		next unless defined $_[$idx];
		$class->_copy_to_other( $_[$idx], $output->[$idx] );
	}
	$output;
}
sub _from_array {
	my ($class, $array) = @_;
	[
		map {
			my $record = $class->new;
			$class->_copy_to_other($array->[$_], $record);
			$record;
		} 0..$array->count-1
	]
}
sub _copy_to_other {
	my ($class, $this, $that) = @_;
	$that->_oper ($this->_oper);
	$that->_index($this->_index);
}

$ffi->load_custom_type(
	RecordArrayRef( 'InputArrayPtr',
		record_module => __PACKAGE__, with_size => 0,
	),
	=> 'TF_Input_array');
$ffi->load_custom_type(
	RecordArrayRef( 'InputArrayPtrSz',
		record_module => __PACKAGE__, with_size => 1,
	),
	=> 'TF_Input_array_sz');

1;

__END__

=pod

=encoding UTF-8

=head1 NAME

lib/AI/TensorFlow/Libtensorflow/Input.pm

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Lib.pm

use Alien::Libtensorflow;
use FFI::Platypus;
use AI::TensorFlow::Libtensorflow::Lib::FFIType::Variant::PackableArrayRef;
use AI::TensorFlow::Libtensorflow::Lib::FFIType::Variant::PackableMaybeArrayRef;
use AI::TensorFlow::Libtensorflow::Lib::FFIType::TFPtrSizeScalar;

use base 'Exporter::Tiny';
our @EXPORT_OK = qw(arg);

sub lib {
	$ENV{AI_TENSORFLOW_LIBTENSORFLOW_LIB_DLL}
	// find_lib_or_die(
		lib => 'tensorflow',
		symbol => ['TF_Version'],
		alien => ['Alien::Libtensorflow'] );
}

sub ffi {
	state $ffi;
	$ffi ||= do {
		my $ffi = FFI::Platypus->new( api => 2 );
		$ffi->lib( __PACKAGE__->lib );

		$ffi->load_custom_type('::PointerSizeBuffer' => 'tf_config_proto_buffer');
		$ffi->load_custom_type('::PointerSizeBuffer' => 'tf_tensor_shape_proto_buffer');
		$ffi->load_custom_type('::PointerSizeBuffer' => 'tf_attr_value_proto_buffer');

		$ffi->load_custom_type('AI::TensorFlow::Libtensorflow::Lib::FFIType::TFPtrSizeScalar'
			=> 'tf_text_buffer');

		$ffi->load_custom_type( PackableMaybeArrayRef( 'DimsBuffer', pack_type => 'q' )
			=> 'tf_dims_buffer'
		);


		$ffi->type('object(AI::TensorFlow::Libtensorflow::SessionOptions)' => 'TF_SessionOptions');

		$ffi->type('object(AI::TensorFlow::Libtensorflow::Graph)' => 'TF_Graph');

		$ffi->type('object(AI::TensorFlow::Libtensorflow::OperationDescription)'
			=> 'TF_OperationDescription');

		$ffi->load_custom_type('::PtrObject', 'TF_Operation' => 'AI::TensorFlow::Libtensorflow::Operation');

		$ffi->type('opaque' => 'TF_Function');

		$ffi->type('opaque' => 'TF_FunctionOptions');

		$ffi->type('object(AI::TensorFlow::Libtensorflow::ImportGraphDefOptions)' => 'TF_ImportGraphDefOptions');

		$ffi->type('object(AI::TensorFlow::Libtensorflow::ImportGraphDefResults)' => 'TF_ImportGraphDefResults');

		$ffi->type('object(AI::TensorFlow::Libtensorflow::Session)' => 'TF_Session');

		$ffi->type('opaque' => 'TF_DeprecatedSession');

		$ffi->type('object(AI::TensorFlow::Libtensorflow::DeviceList)' => 'TF_DeviceList');

		$ffi->type('object(AI::TensorFlow::Libtensorflow::TFLibrary)' => 'TF_Library');

		$ffi->type('object(AI::TensorFlow::Libtensorflow::ApiDefMap)' => 'TF_ApiDefMap');

		$ffi->type('opaque' => 'TF_Server');



		$ffi->type('opaque' => 'TF_CheckpointReader');

		$ffi->type('opaque' => 'TF_AttrBuilder');

		$ffi->type('opaque' => 'TF_ShapeAndType');

		$ffi->type('opaque' => 'TF_ShapeAndTypeList');



		$ffi->type('opaque' => 'TF_WritableFileHandle');

		$ffi->type('opaque' => 'TF_StringStream');

		$ffi->type('opaque' => 'TF_Thread');


		$ffi->type('opaque' => 'TF_KernelBuilder');

		$ffi->type('opaque' => 'TF_OpKernelConstruction');

		$ffi->type('opaque' => 'TF_OpKernelContext');


		$ffi->type('opaque' => 'TF_VariableInputLockHolder');

		$ffi->type('opaque' => 'TF_CoordinationServiceAgent');


		$ffi->type('opaque' => 'TF_Shape');


		$ffi->type('object(AI::TensorFlow::Libtensorflow::Status)' => 'TF_Status');


		$ffi->load_custom_type('::PtrObject', 'TF_Tensor' => 'AI::TensorFlow::Libtensorflow::Tensor');


		$ffi->load_custom_type('::PtrObject', 'TF_TString' => 'AI::TensorFlow::Libtensorflow::TString');


		$ffi->type('object(AI::TensorFlow::Libtensorflow::Eager::ContextOptions)', 'TFE_ContextOptions');

		$ffi->type('object(AI::TensorFlow::Libtensorflow::Eager::Context)', 'TFE_Context');



		## Callbacks for deallocation
		# For TF_Buffer
		$ffi->type('(opaque,size_t)->void'        => 'data_deallocator_t');
		# For TF_Tensor
		$ffi->type('(opaque,size_t,opaque)->void' => 'tensor_deallocator_t');

		$ffi;
	};
}

sub mangler_default {
	my $target = (caller)[0];
	my $prefix = 'TF';
	if( $target =~ /::Eager::/ ) {
		$prefix = 'TFE';
	}
	sub {
		my ($name) = @_;
		"${prefix}_$name";
	}
}

sub mangler_for_object {
	my ($class, $object_name) = @_;
	sub {
		my ($name) = @_;

		# constructor and destructors
		return "TF_New${object_name}" if $name eq 'New';
		return "TF_Delete${object_name}" if $name eq 'Delete';

		return "TF_${object_name}$name";
	};
}
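The manglers above map Perl-side method names onto the flat C symbol names of the TensorFlow C API. A standalone sketch of the object-method convention (helper name hypothetical):

```perl
use strict;
use warnings;

# New/Delete become TF_New<Object>/TF_Delete<Object>; everything else
# becomes TF_<Object><Method>.
sub mangle_for_object {
    my ($object_name) = @_;
    return sub {
        my ($name) = @_;
        return "TF_New${object_name}"    if $name eq 'New';
        return "TF_Delete${object_name}" if $name eq 'Delete';
        return "TF_${object_name}${name}";
    };
}

my $mangle = mangle_for_object('DataType');
print $mangle->('Size'), "\n";     # prints "TF_DataTypeSize"
print $mangle->('Delete'), "\n";   # prints "TF_DeleteDataType"
```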

sub arg(@) {
	my $arg = AI::TensorFlow::Libtensorflow::Lib::_Arg->new(
		type => shift,
		id => shift,
	);
	return $arg, @_;
}

# from FFI::Platypus::Type::StringArray
use constant _pointer_incantation =>
  $^O eq 'MSWin32' && do { require Config; $Config::Config{archname} =~ /MSWin32-x64/ }
  ? 'Q'
  : 'L!';
use constant _size_of_pointer => FFI::Platypus->new( api => 2 )->sizeof('opaque');
use constant _pointer_buffer => "P" . _size_of_pointer;
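The constants above pick a pack template wide enough for a native pointer; on MSWin32-x64 a C `long` is still 4 bytes, hence the `'Q'` special case. A quick standalone check of the template sizes:

```perl
use strict;
use warnings;
use Config;

# 'Q' is always 8 bytes; 'L!' is the native unsigned long, which matches
# the pointer size on common 64-bit Unix builds but not on 64-bit Windows.
printf "Q: %d bytes, L!: %d bytes, pointer: %d bytes\n",
    length(pack 'Q', 0), length(pack 'L!', 0), $Config{ptrsize};
```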

package # hide from PAUSE
  AI::TensorFlow::Libtensorflow::Lib::_Arg {

use Class::Tiny qw(type id);

use overload
	q{""} => 'stringify',
	eq => 'eq';

sub stringify { $_[0]->type }

sub eq {
	my ($self, $other, $swap) = @_;
	"$self" eq "$other";
}

}



1;

__END__

lib/AI/TensorFlow/Libtensorflow/Lib.pm

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/TFPtrPtrLenSizeArrayRefScalar.pm

package AI::TensorFlow::Libtensorflow::Lib::FFIType::TFPtrPtrLenSizeArrayRefScalar;
# ABSTRACT: Type to hold string list as void** strings, size_t* lengths, int num_items
$AI::TensorFlow::Libtensorflow::Lib::FFIType::TFPtrPtrLenSizeArrayRefScalar::VERSION = '0.0.7';
use strict;
use warnings;
# TODO implement this

sub perl_to_native {
	...
}

sub perl_to_native_post {
	...
}

sub ffi_custom_type_api_1 {
	{
		'native_type' => 'opaque',
		'perl_to_native' => \&perl_to_native,
		'perl_to_native_post' => \&perl_to_native_post,
		argument_count => 3,
	}
}

1;

__END__

=pod

=encoding UTF-8

lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/TFPtrPtrLenSizeArrayRefScalar.pm

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/TFPtrSizeScalar.pm

package AI::TensorFlow::Libtensorflow::Lib::FFIType::TFPtrSizeScalar;
# ABSTRACT: Type to hold pointer and size in a scalar (input only)
$AI::TensorFlow::Libtensorflow::Lib::FFIType::TFPtrSizeScalar::VERSION = '0.0.7';
use strict;
use warnings;
use FFI::Platypus;
use FFI::Platypus::API qw(
  arguments_set_pointer
  arguments_set_uint32
  arguments_set_uint64
);
use FFI::Platypus::Buffer qw( scalar_to_buffer );

my @stack;

*arguments_set_size_t
	= FFI::Platypus->new( api => 2 )->sizeof('size_t') == 4
	? \&arguments_set_uint32
	: \&arguments_set_uint64;

sub perl_to_native {
	my($pointer, $size) = scalar_to_buffer($_[0]);
	push @stack, [ $pointer, $size ];
	arguments_set_pointer $_[1], $pointer;
	arguments_set_size_t($_[1]+1, $size);
}

sub perl_to_native_post {
	my($pointer, $size) = @{ pop @stack };
	();
}

sub ffi_custom_type_api_1
{
	{
		native_type         => 'opaque',
		perl_to_native      => \&perl_to_native,
		perl_to_native_post => \&perl_to_native_post,
		argument_count      => 2,
	}
}

1;

__END__

=pod

=encoding UTF-8

lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/TFPtrSizeScalar.pm

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/TFPtrSizeScalarRef.pm

package AI::TensorFlow::Libtensorflow::Lib::FFIType::TFPtrSizeScalarRef;
# ABSTRACT: Type to hold pointer and size in a scalar reference
$AI::TensorFlow::Libtensorflow::Lib::FFIType::TFPtrSizeScalarRef::VERSION = '0.0.7';
use strict;
use warnings;
use FFI::Platypus;
use FFI::Platypus::Buffer qw(scalar_to_buffer);
use FFI::Platypus::API qw(
	arguments_set_pointer
	arguments_set_uint32
	arguments_set_uint64
);


my @stack;

# See FFI::Platypus::Type::PointerSizeBuffer
*arguments_set_size_t
	= FFI::Platypus->new( api => 2 )->sizeof('size_t') == 4
	? \&arguments_set_uint32
	: \&arguments_set_uint64;

sub perl_to_native {
	my ($value, $i) = @_;
	die "Value must be a ScalarRef" unless ref $value eq 'SCALAR';

	my ($pointer, $size) = defined $$value
		? scalar_to_buffer($$value)
		: (0, 0);

	push @stack, [ $value, $pointer, $size ];
	arguments_set_pointer( $i  , $pointer);
	arguments_set_size_t(  $i+1, $size);
}

sub perl_to_native_post {
	pop @stack;
	();
}

sub ffi_custom_type_api_1 {
	{
		'native_type' => 'opaque',
		'perl_to_native' => \&perl_to_native,
		'perl_to_native_post' => \&perl_to_native_post,
		argument_count => 2,
	}
}

1;

__END__

=pod

=encoding UTF-8

lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/TFPtrSizeScalarRef.pm

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/Variant/PackableArrayRef.pm

$AI::TensorFlow::Libtensorflow::Lib::FFIType::Variant::PackableArrayRef::VERSION = '0.0.7';
use strict;
use warnings;
use FFI::Platypus::Buffer qw(scalar_to_buffer buffer_to_scalar);
use FFI::Platypus::API qw( arguments_set_pointer arguments_set_sint32 );

use Package::Variant;
use Module::Runtime 'module_notional_filename';

sub make_variant {
	my ($class, $target_package, $package, %arguments) = @_;

	die "Invalid pack type, must be single character"
		unless $arguments{pack_type} =~ /^.$/;

	my @stack;

	my $perl_to_native = install perl_to_native => sub {
		my ($value, $i) = @_;
		die "Value must be an ArrayRef"
			unless defined $value && ref $value eq 'ARRAY';
		my $data = pack  $arguments{pack_type} . '*', @$value;
		my $n    = scalar @$value;
		my ($pointer, $size) = scalar_to_buffer($data);

		push @stack, [ \$data, $pointer, $size ];
		arguments_set_pointer( $i  , $pointer);
		arguments_set_sint32(  $i+1, $n);
	};

	my $perl_to_native_post = install perl_to_native_post => sub {
		my ($data_ref, $pointer, $size) = @{ pop @stack };
		$$data_ref = buffer_to_scalar($pointer, $size);
		@{$_[0]} = unpack $arguments{pack_type} . '*', $$data_ref;
		();
	};
	install ffi_custom_type_api_1 => sub {
		{
			native_type => 'opaque',
			argument_count => 2,
			perl_to_native => $perl_to_native,
			perl_to_native_post => $perl_to_native_post,
		}
	};
}
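The custom type above marshals an ArrayRef through a packed buffer and decodes it again after the call. A standalone round-trip of that marshalling for `pack_type 'q'` (signed 64-bit, as used for tensor dimensions; the dims below are hypothetical):

```perl
use strict;
use warnings;

# Pack an ArrayRef into one contiguous int64_t buffer (what C sees),
# then unpack it back into a Perl list after the call returns.
my @dims = (1, 224, 224, 3);
my $data = pack 'q*', @dims;
my @back = unpack 'q*', $data;
print "@back\n";   # prints "1 224 224 3"
```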

sub make_variant_package_name {
	my ($class, $package, %arguments) = @_;
	$package = "AI::TensorFlow::Libtensorflow::Lib::FFIType::TF${package}";
	die "Won't clobber $package" if $INC{module_notional_filename $package};
	return $package;
}

1;

__END__

=pod

=encoding UTF-8

lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/Variant/PackableArrayRef.pm

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/Variant/PackableMaybeArrayRef.pm

$AI::TensorFlow::Libtensorflow::Lib::FFIType::Variant::PackableMaybeArrayRef::VERSION = '0.0.7';
use strict;
use warnings;
use FFI::Platypus::Buffer qw(scalar_to_buffer buffer_to_scalar);
use FFI::Platypus::API qw( arguments_set_pointer arguments_set_sint32 );
use Scalar::Util ();

use Package::Variant;
use Module::Runtime 'module_notional_filename';

sub make_variant {
	my ($class, $target_package, $package, %arguments) = @_;

	die "Invalid pack type, must be single character"
		unless $arguments{pack_type} =~ /^.$/;

	my @stack;

	my $perl_to_native = install perl_to_native => sub {
		my ($value, $i) = @_;
		if( defined $value ) {
			die "Value must be an ArrayRef" unless ref $value eq 'ARRAY';
			my $data = pack  $arguments{pack_type} . '*', @$value;
			my $n    = scalar @$value;
			my ($pointer, $size) = scalar_to_buffer($data);

			push @stack, [ \$data, $pointer, $size ];
			arguments_set_pointer( $i  , $pointer);
			arguments_set_sint32(  $i+1, $n);
		} else {
			my $data = undef;
			my $n    = -1;
			my ($pointer, $size) = (0, 0);
			push @stack, [ \$data, $pointer, $size ];
			arguments_set_pointer( $i  , $pointer);
			arguments_set_sint32(  $i+1, $n);
		}
	};

	my $perl_to_native_post = install perl_to_native_post => sub {
		my ($data_ref, $pointer, $size) = @{ pop @stack };
		if( ! Scalar::Util::readonly($_[0]) ) {
			$$data_ref = buffer_to_scalar($pointer, $size);
			@{$_[0]} = unpack $arguments{pack_type} . '*', $$data_ref;
		}
		();
	};
	install ffi_custom_type_api_1 => sub {
		{
			native_type => 'opaque',
			argument_count => 2,
			perl_to_native => $perl_to_native,
			perl_to_native_post => $perl_to_native_post,
		}
	};
}

sub make_variant_package_name {
	my ($class, $package, %arguments) = @_;
	$package = "AI::TensorFlow::Libtensorflow::Lib::FFIType::TF${package}";
	die "Won't clobber $package" if $INC{module_notional_filename $package};
	return $package;
}

1;

__END__

=pod

=encoding UTF-8

lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/Variant/PackableMaybeArrayRef.pm

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/Variant/RecordArrayRef.pm

$AI::TensorFlow::Libtensorflow::Lib::FFIType::Variant::RecordArrayRef::VERSION = '0.0.7';
use strict;
use warnings;
use FFI::Platypus::Buffer qw(scalar_to_buffer buffer_to_scalar);
use FFI::Platypus::API qw( arguments_set_pointer arguments_set_sint32 );

use Package::Variant;
use Module::Runtime qw(module_notional_filename is_module_name);

sub make_variant {
	my ($class, $target_package, $package, %arguments) = @_;

	die "Missing/invalid module name: $arguments{record_module}"
		unless is_module_name($arguments{record_module});

	my $record_module = $arguments{record_module};
	my $with_size     = exists $arguments{with_size} ? $arguments{with_size} : 1;

	my @stack;

	my $perl_to_native = install perl_to_native => sub {
		my ($value, $i) = @_;
		my $data = pack "(a*)*", map $$_, @$value;
		my($pointer, $size) = scalar_to_buffer($data);
		my $n = @$value;
		my $sizeof = $size / $n;
		push @stack, [ \$data, $n, $pointer, $size , $sizeof ];
		arguments_set_pointer $i  , $pointer;
		arguments_set_sint32  $i+1, $n if $with_size;
	};

	my $perl_to_native_post = install perl_to_native_post => sub {
		my($data_ref, $n, $pointer, $size, $sizeof) = @{ pop @stack };
		$$data_ref = buffer_to_scalar($pointer, $size);
		@{$_[0]} = map {
			bless \$_, $record_module
		} unpack  "(a${sizeof})*", $$data_ref;
		();
	};

	install ffi_custom_type_api_1 => sub {
		{
			native_type => 'opaque',
			argument_count => ($with_size ? 2 : 1),
			perl_to_native => $perl_to_native,
			perl_to_native_post => $perl_to_native_post,
		}
	};
}
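C<make_variant> above concatenates fixed-size records into one contiguous buffer with C<pack "(a*)*"> and later splits the buffer back by the derived per-record size (C<$size / $n>). A small Python sketch of the same idea, with a hypothetical two-int32 record layout standing in for whatever C<record_module> defines:

```python
import struct

RECORD_FMT = "<ii"                    # hypothetical 8-byte record: two int32 fields
RECORD_SIZE = struct.calcsize(RECORD_FMT)

def records_to_buffer(records):
    # pack "(a*)*" analogue: fixed-size records laid out back to back.
    return b"".join(struct.pack(RECORD_FMT, *r) for r in records)

def buffer_to_records(buf):
    # sizeof = total size / count; split "(a${sizeof})*"-style.
    assert len(buf) % RECORD_SIZE == 0
    return [struct.unpack(RECORD_FMT, buf[i:i + RECORD_SIZE])
            for i in range(0, len(buf), RECORD_SIZE)]

buf = records_to_buffer([(1, 2), (3, 4)])
assert buffer_to_records(buf) == [(1, 2), (3, 4)]
```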

sub make_variant_package_name {
	my ($class, $package, %arguments) = @_;
	$package = "AI::TensorFlow::Libtensorflow::Lib::FFIType::TF${package}";
	die "Won't clobber $package" if $INC{module_notional_filename $package};
	return $package;
}

1;

__END__

=pod

=encoding UTF-8

=head1 NAME

AI::TensorFlow::Libtensorflow::Lib::FFIType::Variant::RecordArrayRef

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Lib/Types.pm  view on Meta::CPAN

package AI::TensorFlow::Libtensorflow::Lib::Types;
# ABSTRACT: Type library
$AI::TensorFlow::Libtensorflow::Lib::Types::VERSION = '0.0.7';
use strict;
use warnings;
use Type::Library 0.008 -base,
	-declare => [qw(
		TFTensor
		TFGraph
		TFDataType

		Dims
	)];
use Type::Utils -all;
use Types::Standard qw(ArrayRef Int Tuple InstanceOf);

class_type TFTensor => { class => 'AI::TensorFlow::Libtensorflow::Tensor' };

class_type TFGraph => { class => 'AI::TensorFlow::Libtensorflow::Graph' };

class_type TFDataType => { class => 'AI::TensorFlow::Libtensorflow::DataType' };

class_type TFSession => { class => 'AI::TensorFlow::Libtensorflow::Session' };

class_type TFBuffer => { class => 'AI::TensorFlow::Libtensorflow::Buffer' };

class_type TFOperation => { class => 'AI::TensorFlow::Libtensorflow::Operation' };


declare Dims => as ArrayRef[Int];

class_type TFOutput => { class => 'AI::TensorFlow::Libtensorflow::Output' };

declare_coercion "TFOutputFromTuple",
	to_type 'TFOutput',
	from Tuple[InstanceOf['AI::TensorFlow::Libtensorflow::Operation'],Int],
	q {
		AI::TensorFlow::Libtensorflow::Output->New({
			oper  => $_->[0],
			index => $_->[1],
		});
	};

class_type TFInput => { class => 'AI::TensorFlow::Libtensorflow::Input' };

declare_coercion "TFInputFromTuple",
	to_type 'TFInput',
	from Tuple[InstanceOf['AI::TensorFlow::Libtensorflow::Operation'],Int],
	q {
		AI::TensorFlow::Libtensorflow::Input->New({
			oper  => $_->[0],
			index => $_->[1],
		});
	};
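The coercions above build an object from a C<[$op, $index]> tuple while leaving values that already pass the type check untouched. A Python sketch of the same tuple-to-object coercion, with a hypothetical C<TFOutput> class standing in for AI::TensorFlow::Libtensorflow::Output:

```python
class TFOutput:
    """Hypothetical stand-in for AI::TensorFlow::Libtensorflow::Output."""
    def __init__(self, oper, index):
        self.oper, self.index = oper, index

def coerce_output(value):
    # Pass through values that are already the target type; otherwise
    # treat the value as an (operation, index) tuple, as TFOutputFromTuple does.
    if isinstance(value, TFOutput):
        return value
    oper, index = value
    return TFOutput(oper, index)

out = coerce_output(("my_op", 0))
assert (out.oper, out.index) == ("my_op", 0)
```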

1;

__END__

=pod

=encoding UTF-8

=head1 NAME

AI::TensorFlow::Libtensorflow::Lib::Types - Type library

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Lib/_Alloc.pm  view on Meta::CPAN


use Feature::Compat::Defer;

# True if the _aligned_alloc() implementation needs the size to be a
# multiple of the alignment.
our $_ALIGNED_ALLOC_ALIGNMENT_MULTIPLE = 0;

my $ffi = FFI::Platypus->new;
$ffi->lib(undef);
if( $ffi->find_symbol('aligned_alloc') ) {
	# C11 aligned_alloc()
	# NOTE: C11 aligned_alloc not available on Windows.
	# void *aligned_alloc(size_t alignment, size_t size);
	$ffi->attach( [ 'aligned_alloc' => '_aligned_alloc' ] =>
		[ 'size_t', 'size_t' ] => 'opaque' );
	*_aligned_free = *free;
	$_ALIGNED_ALLOC_ALIGNMENT_MULTIPLE = 1;
} else {
	# Pure Perl _aligned_alloc()
	quote_sub '_aligned_alloc', q{
		my ($alignment, $size) = @_;

		# $alignment must fit in 8-bits
		die "\$alignment must be <= 255" if $alignment > 0xFF;

		my $requested_size = $alignment + $size;       # size_t
		my $ptr = malloc($requested_size);             # void*
		my $offset = $alignment - $ptr % $alignment;   # size_t
		my $aligned = $ptr + $offset;                  # void*

		strcpy $aligned - 1, chr($offset);

		return $aligned;
	};
	quote_sub '_aligned_free', q{
		my ($aligned) = @_;
		my $offset = ord(buffer_to_scalar($aligned - 1, 1));
		free( $aligned - $offset );
	};
	$_ALIGNED_ALLOC_ALIGNMENT_MULTIPLE = 0;
}
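The pure-Perl fallback above over-allocates by the alignment, advances to the next alignment boundary, and stashes the offset in the single byte just before the aligned pointer so that C<_aligned_free()> can recover the original address. The pointer arithmetic can be sketched in Python with addresses as plain integers (no real memory is allocated here; this only models the bookkeeping):

```python
def aligned_alloc_sim(base_addr, alignment, size):
    # Offset must fit in one byte, hence the <= 255 limit on alignment.
    assert alignment <= 0xFF
    offset = alignment - base_addr % alignment   # always in 1..alignment
    aligned = base_addr + offset
    # The real code stores chr(offset) at (aligned - 1); we return it.
    return aligned, offset

def aligned_free_sim(aligned, offset):
    # Recover the original malloc'd address for free().
    return aligned - offset

aligned, off = aligned_alloc_sim(0x1003, 16, 100)
assert aligned % 16 == 0
assert 1 <= off <= 16
assert aligned_free_sim(aligned, off) == 0x1003
```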

use Const::Fast;
# See <https://github.com/tensorflow/tensorflow/issues/58112>.
# This is a power-of-two.
const our $EIGEN_MAX_ALIGN_BYTES => do { _tf_alignment(); };

sub _tf_alignment {
	# Bytes of alignment sorted in descending order:
	# NOTE: Alignment cannot currently be larger than 128 bytes because the
	# pure Perl implementation of _aligned_alloc() only supports alignments
	# of up to 255 bytes (so 128 bytes is the largest power-of-two
	# alignment).
	my @alignments = map 2**$_, reverse 0..7;

	# 1-byte element
	my $el = INT8;
	my $el_size = $el->Size;

	my $max_alignment = $alignments[0];
	my $req_size = 2 * $max_alignment + $el_size;
	# All data that is sent to TF_NewTensor here is within the block of
	# memory allocated at $ptr_base.
	my $ptr_base = malloc($req_size);
	defer { free($ptr_base); }

	# start at offset that is aligned with $max_alignment
	my $ptr = $ptr_base + ( $max_alignment - $ptr_base % $max_alignment );

	my $create_tensor_at_alignment = sub {
		my ($n, $dealloc_called) = @_;
		my $offset = $n - $ptr % $n;
		my $ptr_offset = $ptr + $offset;
		my $space_for_data = $req_size - $offset;

		window(my $data, $ptr_offset, $space_for_data);

		return AI::TensorFlow::Libtensorflow::Tensor->New(
			$el, [int($space_for_data/$el_size)], \$data, sub {
				$$dealloc_called = 1
			}
		);
	};

	for my $a_idx (0..@alignments-2) {
		my @dealloc = (0, 0);
		my @t = map {
			$create_tensor_at_alignment->($alignments[$a_idx + $_], \$dealloc[$_]);
		} (0..1);
		return $alignments[$a_idx] if $dealloc[0] == 0 && $dealloc[1] == 1;
	}

	return 1;
}

sub _tf_aligned_alloc {
	my ($class, $size) = @_;
	return _aligned_alloc($EIGEN_MAX_ALIGN_BYTES,
		$_ALIGNED_ALLOC_ALIGNMENT_MULTIPLE
		# since $EIGEN_MAX_ALIGN_BYTES is a power-of-two, use
		# two's complement bit arithmetic
		?  ($size + $EIGEN_MAX_ALIGN_BYTES - 1 ) & -$EIGEN_MAX_ALIGN_BYTES
		: $size
	);
}
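The rounding in C<_tf_aligned_alloc> relies on C<-$EIGEN_MAX_ALIGN_BYTES> acting as a bit mask when the alignment is a power of two. A quick Python sketch of the same formula:

```python
def round_up_pow2(size, align):
    # align must be a power of two; in two's complement, -align is the
    # mask that clears the low log2(align) bits.
    assert align & (align - 1) == 0 and align > 0
    return (size + align - 1) & -align

assert round_up_pow2(1, 64) == 64
assert round_up_pow2(64, 64) == 64
assert round_up_pow2(65, 64) == 128
assert round_up_pow2(0, 64) == 0
```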

sub _tf_aligned_free {
	my ($class, $ptr) = @_;
	_aligned_free($ptr);
}

1;

__END__

=pod

=encoding UTF-8

=head1 NAME

AI::TensorFlow::Libtensorflow::Lib::_Alloc

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Manual.pod  view on Meta::CPAN

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod  view on Meta::CPAN



=end Pod::Coverage

=head1 FUNCTIONS

=head2 TF_Version

=over 2

  TF_Version returns a string describing version information of the
  TensorFlow library. TensorFlow uses semantic versioning.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern const char* TF_Version(void);

=head2 TF_TensorFromProto

=over 2

  Parses a serialized TensorProto into a TF_Tensor.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_TensorFromProto(const TF_Buffer* from,
                                                TF_Tensor* to, TF_Status* status);

=head2 TF_NewSessionOptions

=over 2

  Return a new options object.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_SessionOptions* TF_NewSessionOptions(void);

=head2 TF_SetTarget

=over 2

  Set the target in TF_SessionOptions.options.
  target can be empty, a single entry, or a comma separated list of entries.
  Each entry is in one of the following formats:
  "local"
  ip:port
  host:port

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetTarget(TF_SessionOptions* options,
                                          const char* target);

=head2 TF_SetConfig

=over 2

  Set the config in TF_SessionOptions.options.
  config should be a serialized tensorflow.ConfigProto proto.
  If config was not parsed successfully as a ConfigProto, record the
  error information in *status.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetConfig(TF_SessionOptions* options,
                                          const void* proto, size_t proto_len,
                                          TF_Status* status);

=head2 TF_DeleteSessionOptions

=over 2

  Destroy an options object.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteSessionOptions(TF_SessionOptions*);

=head2 TF_NewGraph

=over 2

  Return a new graph object.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Graph* TF_NewGraph(void);

=head2 TF_DeleteGraph

=over 2

  Destroy a graph object. The graph will be deleted once no more
  TF_Sessions are referencing it.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteGraph(TF_Graph*);

=head2 TF_GraphSetTensorShape

=over 2

  Sets the shape of the Tensor referenced by `output` in `graph` to
  the shape described by `dims` and `num_dims`.
  
  If the number of dimensions is unknown, `num_dims` must be set to
  -1 and `dims` can be null. If a dimension is unknown, the
  corresponding entry in the `dims` array must be -1.
  
  This does not overwrite the existing shape associated with `output`,
  but merges the input shape with the existing shape.  For example,
  setting a shape of [-1, 2] with an existing shape [2, -1] would set
  a final shape of [2, 2] based on shape merging semantics.
  
  Returns an error into `status` if:
    * `output` is not in `graph`.
    * An invalid shape is being set (e.g., the shape being set
      is incompatible with the existing shape).

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_GraphSetTensorShape(TF_Graph* graph,
                                                    TF_Output output,
                                                    const int64_t* dims,
                                                    const int num_dims,
                                                    TF_Status* status);
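The merging semantics described above can be illustrated with a short Python sketch (an illustration of the documented rules, not TensorFlow's implementation):

```python
def merge_shapes(a, b):
    # -1 means "unknown dimension"; known dimensions must agree.
    if len(a) != len(b):
        raise ValueError("rank mismatch")
    out = []
    for x, y in zip(a, b):
        if x == -1:
            out.append(y)
        elif y == -1 or x == y:
            out.append(x)
        else:
            raise ValueError(f"incompatible dims {x} vs {y}")
    return out

# The example from the documentation: [-1, 2] merged with [2, -1] -> [2, 2].
assert merge_shapes([-1, 2], [2, -1]) == [2, 2]
```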

=head2 TF_GraphGetTensorNumDims

=over 2

  Returns the number of dimensions of the Tensor referenced by `output`
  in `graph`.
  
  If the number of dimensions in the shape is unknown, returns -1.
  
  Returns an error into `status` if:
    * `output` is not in `graph`.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_GraphGetTensorNumDims(TF_Graph* graph,
                                                     TF_Output output,
                                                     TF_Status* status);

=head2 TF_GraphGetTensorShape

=over 2

  Returns the shape of the Tensor referenced by `output` in `graph`
  into `dims`. `dims` must be an array large enough to hold `num_dims`
  entries (e.g., the return value of TF_GraphGetTensorNumDims).
  
  If the number of dimensions in the shape is unknown or the shape is
  a scalar, `dims` will remain untouched. Otherwise, each element of
  `dims` will be set corresponding to the size of the dimension. An
  unknown dimension is represented by `-1`.
  
  Returns an error into `status` if:
    * `output` is not in `graph`.
    * `num_dims` does not match the actual number of dimensions.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_GraphGetTensorShape(TF_Graph* graph,
                                                    TF_Output output,
                                                    int64_t* dims, int num_dims,
                                                    TF_Status* status);

=head2 TF_NewOperationLocked

=over 2

  Creates a new operation - see `TF_NewOperation` for more details.
  
  The lock for `graph` must be held when calling this function.
  
  Unless implementing advanced behavior, like custom gradient functions, you
  most likely need to call `TF_NewOperation` instead.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_OperationDescription* TF_NewOperationLocked(
      TF_Graph* graph, const char* op_type, const char* oper_name);

=head2 TF_NewOperation

=over 2

  Operation will only be added to *graph when TF_FinishOperation() is
  called (assuming TF_FinishOperation() does not return an error).
  *graph must not be deleted until after TF_FinishOperation() is
  called.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_OperationDescription* TF_NewOperation(
      TF_Graph* graph, const char* op_type, const char* oper_name);

=head2 TF_SetDevice

=over 2

  Specify the device for `desc`.  Defaults to empty, meaning unconstrained.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetDevice(TF_OperationDescription* desc,
                                          const char* device);

=head2 TF_AddInput

=over 2

  For inputs that take a single tensor.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_AddInput(TF_OperationDescription* desc,
                                         TF_Output input);

=head2 TF_AddInputList

=over 2

  For inputs that take a list of tensors.
  inputs must point to TF_Output[num_inputs].

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_AddInputList(TF_OperationDescription* desc,
                                             const TF_Output* inputs,
                                             int num_inputs);

=head2 TF_AddControlInput

=over 2

  Call once per control input to `desc`.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_AddControlInput(TF_OperationDescription* desc,
                                                TF_Operation* input);

=head2 TF_ColocateWith

=over 2

  Request that `desc` be co-located on the device where `op`
  is placed.
  
  Use of this is discouraged since the implementation of device placement is
  subject to change. Primarily intended for internal libraries.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ColocateWith(TF_OperationDescription* desc,
                                             TF_Operation* op);

=head2 TF_SetAttrString

=over 2

  `value` must point to a string of length `length` bytes.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrString(TF_OperationDescription* desc,
                                              const char* attr_name,
                                              const void* value, size_t length);

=head2 TF_SetAttrStringList

=over 2

  `values` and `lengths` each must have lengths `num_values`.
  `values[i]` must point to a string of length `lengths[i]` bytes.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrStringList(TF_OperationDescription* desc,
                                                  const char* attr_name,
                                                  const void* const* values,
                                                  const size_t* lengths,
                                                  int num_values);

=head2 TF_SetAttrInt

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrInt(TF_OperationDescription* desc,
                                           const char* attr_name, int64_t value);

=head2 TF_SetAttrIntList

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrIntList(TF_OperationDescription* desc,
                                               const char* attr_name,
                                               const int64_t* values,
                                               int num_values);

=head2 TF_SetAttrFloat

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrFloat(TF_OperationDescription* desc,
                                             const char* attr_name, float value);

=head2 TF_SetAttrFloatList

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrFloatList(TF_OperationDescription* desc,
                                                 const char* attr_name,
                                                 const float* values,
                                                 int num_values);

=head2 TF_SetAttrBool

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrBool(TF_OperationDescription* desc,
                                            const char* attr_name,
                                            unsigned char value);

=head2 TF_SetAttrBoolList

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrBoolList(TF_OperationDescription* desc,
                                                const char* attr_name,
                                                const unsigned char* values,
                                                int num_values);

=head2 TF_SetAttrType

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrType(TF_OperationDescription* desc,
                                            const char* attr_name,
                                            TF_DataType value);

=head2 TF_SetAttrTypeList

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrTypeList(TF_OperationDescription* desc,
                                                const char* attr_name,
                                                const TF_DataType* values,
                                                int num_values);

=head2 TF_SetAttrPlaceholder

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrPlaceholder(TF_OperationDescription* desc,
                                                   const char* attr_name,
                                                   const char* placeholder);

=head2 TF_SetAttrFuncName

=over 2

  Set a 'func' attribute to the specified name.
  `value` must point to a string of length `length` bytes.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrFuncName(TF_OperationDescription* desc,
                                                const char* attr_name,
                                                const char* value, size_t length);

=head2 TF_SetAttrShape

=over 2

  Set `num_dims` to -1 to represent "unknown rank".  Otherwise,
  `dims` points to an array of length `num_dims`.  `dims[i]` must be
  >= -1, with -1 meaning "unknown dimension".

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrShape(TF_OperationDescription* desc,
                                             const char* attr_name,
                                             const int64_t* dims, int num_dims);

=head2 TF_SetAttrShapeList

=over 2

  `dims` and `num_dims` must point to arrays of length `num_shapes`.
  Set `num_dims[i]` to -1 to represent "unknown rank".  Otherwise,
  `dims[i]` points to an array of length `num_dims[i]`.  `dims[i][j]`
  must be >= -1, with -1 meaning "unknown dimension".

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrShapeList(TF_OperationDescription* desc,
                                                 const char* attr_name,
                                                 const int64_t* const* dims,
                                                 const int* num_dims,
                                                 int num_shapes);

=head2 TF_SetAttrTensorShapeProto

=over 2

  `proto` must point to an array of `proto_len` bytes representing a
  binary-serialized TensorShapeProto.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrTensorShapeProto(
      TF_OperationDescription* desc, const char* attr_name, const void* proto,
      size_t proto_len, TF_Status* status);

=head2 TF_SetAttrTensorShapeProtoList

=over 2

  `protos` and `proto_lens` must point to arrays of length `num_shapes`.
  `protos[i]` must point to an array of `proto_lens[i]` bytes
  representing a binary-serialized TensorShapeProto.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrTensorShapeProtoList(
      TF_OperationDescription* desc, const char* attr_name,
      const void* const* protos, const size_t* proto_lens, int num_shapes,
      TF_Status* status);

=head2 TF_SetAttrTensor

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrTensor(TF_OperationDescription* desc,
                                              const char* attr_name,
                                              TF_Tensor* value,
                                              TF_Status* status);

=head2 TF_SetAttrTensorList

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrTensorList(TF_OperationDescription* desc,
                                                  const char* attr_name,
                                                  TF_Tensor* const* values,
                                                  int num_values,
                                                  TF_Status* status);

=head2 TF_SetAttrValueProto

=over 2

  `proto` should point to a sequence of bytes of length `proto_len`
  representing a binary serialization of an AttrValue protocol
  buffer.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SetAttrValueProto(TF_OperationDescription* desc,
                                                  const char* attr_name,
                                                  const void* proto,
                                                  size_t proto_len,
                                                  TF_Status* status);

=head2 TF_FinishOperationLocked

=over 2

  Adds this operation to the graph - see `TF_FinishOperation` for more details.
  
  The lock for `graph` must be held when calling this function.
  
  Unless implementing advanced behavior, like custom gradient functions, you
  most likely need to call `TF_FinishOperation` instead.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Operation* TF_FinishOperationLocked(
      TF_OperationDescription* desc, TF_Status* status);

=head2 TF_FinishOperation

=over 2

  If this function succeeds:
    * *status is set to an OK value,
    * a TF_Operation is added to the graph,
    * a non-null value pointing to the added operation is returned --
      this value is valid until the underlying graph is deleted.
  Otherwise:
    * *status is set to a non-OK value,
    * the graph is not modified,
    * a null value is returned.
  In either case, it deletes `desc`.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Operation* TF_FinishOperation(
      TF_OperationDescription* desc, TF_Status* status);

=head2 TF_OperationName

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern const char* TF_OperationName(TF_Operation* oper);

=head2 TF_OperationOpType

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern const char* TF_OperationOpType(TF_Operation* oper);

=head2 TF_OperationDevice

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern const char* TF_OperationDevice(TF_Operation* oper);

=head2 TF_OperationNumOutputs

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_OperationNumOutputs(TF_Operation* oper);

=head2 TF_OperationOutputType

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_DataType TF_OperationOutputType(TF_Output oper_out);

=head2 TF_OperationOutputListLength

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_OperationOutputListLength(TF_Operation* oper,
                                                         const char* arg_name,
                                                         TF_Status* status);

=head2 TF_OperationNumInputs

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_OperationNumInputs(TF_Operation* oper);

=head2 TF_OperationInputType

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_DataType TF_OperationInputType(TF_Input oper_in);

=head2 TF_OperationInputListLength

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_OperationInputListLength(TF_Operation* oper,
                                                        const char* arg_name,
                                                        TF_Status* status);

=head2 TF_OperationInput

=over 2

  In this code:
    TF_Output producer = TF_OperationInput(consumer);
  There is an edge from producer.oper's output (given by
  producer.index) to consumer.oper's input (given by consumer.index).

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Output TF_OperationInput(TF_Input oper_in);

=head2 TF_OperationAllInputs

=over 2

  Get list of all inputs of a specific operation.  `inputs` must point to
  an array of length at least `max_inputs` (ideally set to
  TF_OperationNumInputs(oper)).  Beware that a concurrent
  modification of the graph can increase the number of inputs of
  an operation.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationAllInputs(TF_Operation* oper,
                                                   TF_Output* inputs,
                                                   int max_inputs);

=head2 TF_OperationOutputNumConsumers

=over 2

  Get the number of current consumers of a specific output of an
  operation.  Note that this number can change when new operations
  are added to the graph.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_OperationOutputNumConsumers(TF_Output oper_out);

=head2 TF_OperationOutputConsumers

=over 2

  Get list of all current consumers of a specific output of an
  operation.  `consumers` must point to an array of length at least
  `max_consumers` (ideally set to
  TF_OperationOutputNumConsumers(oper_out)).  Beware that a concurrent
  modification of the graph can increase the number of consumers of
  an operation.  Returns the number of output consumers (should match
  TF_OperationOutputNumConsumers(oper_out)).

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_OperationOutputConsumers(TF_Output oper_out,
                                                        TF_Input* consumers,
                                                        int max_consumers);

=head2 TF_OperationNumControlInputs

=over 2

  Get the number of control inputs to an operation.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_OperationNumControlInputs(TF_Operation* oper);

=head2 TF_OperationGetControlInputs

=over 2

  Get list of all control inputs to an operation.  `control_inputs` must
  point to an array of length `max_control_inputs` (ideally set to
  TF_OperationNumControlInputs(oper)).  Returns the number of control
  inputs (should match TF_OperationNumControlInputs(oper)).

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_OperationGetControlInputs(
      TF_Operation* oper, TF_Operation** control_inputs, int max_control_inputs);

=head2 TF_OperationNumControlOutputs

=over 2

  Get the number of operations that have `*oper` as a control input.
  Note that this number can change when new operations are added to
  the graph.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_OperationNumControlOutputs(TF_Operation* oper);

=head2 TF_OperationGetControlOutputs

=over 2

  Get the list of operations that have `*oper` as a control input.
  `control_outputs` must point to an array of length at least
  `max_control_outputs` (ideally set to
  TF_OperationNumControlOutputs(oper)). Beware that a concurrent
  modification of the graph can increase the number of control
  outputs.  Returns the number of control outputs (should match
  TF_OperationNumControlOutputs(oper)).

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_OperationGetControlOutputs(
      TF_Operation* oper, TF_Operation** control_outputs,
      int max_control_outputs);

=head2 TF_OperationGetAttrMetadata

=over 2

  Returns metadata about the value of the attribute `attr_name` of `oper`.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_AttrMetadata TF_OperationGetAttrMetadata(
      TF_Operation* oper, const char* attr_name, TF_Status* status);

=head2 TF_OperationGetAttrString

=over 2

  Fills in `value` with the value of the attribute `attr_name`.  `value` must
  point to an array of length at least `max_length` (ideally set to
  TF_AttrMetadata.total_size from TF_OperationGetAttrMetadata(oper,
  attr_name)).

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrString(TF_Operation* oper,
                                                       const char* attr_name,
                                                       void* value,
                                                       size_t max_length,
                                                       TF_Status* status);

=head2 TF_OperationGetAttrStringList

=over 2

  Get the list of strings in the value of the attribute `attr_name`.  Fills in
  `values` and `lengths`, each of which must point to an array of length at
  least `max_values`.
  
  The elements of values will point to addresses in `storage` which must be at
  least `storage_size` bytes in length.  Ideally, max_values would be set to
  TF_AttrMetadata.list_size and `storage` would be at least
  TF_AttrMetadata.total_size, obtained from TF_OperationGetAttrMetadata(oper,
  attr_name).
  
  Fails if storage_size is too small to hold the requested number of strings.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrStringList(
      TF_Operation* oper, const char* attr_name, void** values, size_t* lengths,
      int max_values, void* storage, size_t storage_size, TF_Status* status);

=head2 TF_OperationGetAttrInt

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrInt(TF_Operation* oper,
                                                    const char* attr_name,
                                                    int64_t* value,
                                                    TF_Status* status);

=head2 TF_OperationGetAttrIntList

=over 2

  Fills in `values` with the value of the attribute `attr_name` of `oper`.
  `values` must point to an array of length at least `max_values` (ideally set
  TF_AttrMetadata.list_size from TF_OperationGetAttrMetadata(oper,
  attr_name)).

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrIntList(TF_Operation* oper,
                                                        const char* attr_name,
                                                        int64_t* values,
                                                        int max_values,
                                                        TF_Status* status);

=head2 TF_OperationGetAttrFloat

=over 2

  Fills in `value` with the float value of the attribute `attr_name` of
  `oper`.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrFloat(TF_Operation* oper,
                                                      const char* attr_name,
                                                      float* value,
                                                      TF_Status* status);

=head2 TF_OperationGetAttrFloatList

=over 2

  Fills in `values` with the value of the attribute `attr_name` of `oper`.
  `values` must point to an array of length at least `max_values` (ideally set
  to TF_AttrMetadata.list_size from TF_OperationGetAttrMetadata(oper,
  attr_name)).

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrFloatList(TF_Operation* oper,
                                                          const char* attr_name,
                                                          float* values,
                                                          int max_values,
                                                          TF_Status* status);

=head2 TF_OperationGetAttrBool

=over 2

  Fills in `value` with the boolean value (0 or 1) of the attribute
  `attr_name` of `oper`.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrBool(TF_Operation* oper,
                                                     const char* attr_name,
                                                     unsigned char* value,
                                                     TF_Status* status);

=head2 TF_OperationGetAttrBoolList

=over 2

  Fills in `values` with the value of the attribute `attr_name` of `oper`.
  `values` must point to an array of length at least `max_values` (ideally set
  to TF_AttrMetadata.list_size from TF_OperationGetAttrMetadata(oper,
  attr_name)).

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrBoolList(TF_Operation* oper,
                                                         const char* attr_name,
                                                         unsigned char* values,
                                                         int max_values,
                                                         TF_Status* status);

=head2 TF_OperationGetAttrType

=over 2

  Fills in `value` with the TF_DataType value of the attribute `attr_name` of
  `oper`.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrType(TF_Operation* oper,
                                                     const char* attr_name,
                                                     TF_DataType* value,
                                                     TF_Status* status);

=head2 TF_OperationGetAttrTypeList

=over 2

  Fills in `values` with the value of the attribute `attr_name` of `oper`.
  `values` must point to an array of length at least `max_values` (ideally set
  to TF_AttrMetadata.list_size from TF_OperationGetAttrMetadata(oper,
  attr_name)).

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrTypeList(TF_Operation* oper,
                                                         const char* attr_name,
                                                         TF_DataType* values,
                                                         int max_values,
                                                         TF_Status* status);

=head2 TF_OperationGetAttrShape

=over 2

  Fills in `value` with the shape value of the attribute `attr_name` of
  `oper`.  `value` must point to an array of length at least `num_dims`
  (ideally set to TF_AttrMetadata.total_size from
  TF_OperationGetAttrMetadata(oper, attr_name)).

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrShape(TF_Operation* oper,
                                                      const char* attr_name,
                                                      int64_t* value,
                                                      int num_dims,
                                                      TF_Status* status);

=head2 TF_OperationGetAttrShapeList

=over 2

  Fills in `dims` with the list of shapes in the attribute `attr_name` of
  `oper` and `num_dims` with the corresponding number of dimensions. On return,
  for every i where `num_dims[i]` > 0, `dims[i]` will be an array of
  `num_dims[i]` elements. A value of -1 for `num_dims[i]` indicates that the
  i-th shape in the list is unknown.
  
  The elements of `dims` will point to addresses in `storage` which must be
  large enough to hold at least `storage_size` int64_ts.  Ideally, `num_shapes`
  would be set to TF_AttrMetadata.list_size and `storage_size` would be set to
  TF_AttrMetadata.total_size from TF_OperationGetAttrMetadata(oper,
  attr_name).
  
  Fails if storage_size is insufficient to hold the requested shapes.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrShapeList(
      TF_Operation* oper, const char* attr_name, int64_t** dims, int* num_dims,
      int num_shapes, int64_t* storage, int storage_size, TF_Status* status);

=head2 TF_OperationGetAttrTensorShapeProto

=over 2

  Sets `value` to the binary-serialized TensorShapeProto of the value of
  `attr_name` attribute of `oper`.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrTensorShapeProto(
      TF_Operation* oper, const char* attr_name, TF_Buffer* value,
      TF_Status* status);

=head2 TF_OperationGetAttrTensorShapeProtoList

=over 2

  Fills in `values` with binary-serialized TensorShapeProto values of the
  attribute `attr_name` of `oper`. `values` must point to an array of length at
  least `num_values` (ideally set to TF_AttrMetadata.list_size from
  TF_OperationGetAttrMetadata(oper, attr_name)).

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrTensorShapeProtoList(
      TF_Operation* oper, const char* attr_name, TF_Buffer** values,
      int max_values, TF_Status* status);

=head2 TF_OperationGetAttrTensor

=over 2

  Gets the TF_Tensor valued attribute of `attr_name` of `oper`.
  
  Allocates a new TF_Tensor which the caller is expected to take
  ownership of (and can deallocate using TF_DeleteTensor).

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrTensor(TF_Operation* oper,
                                                       const char* attr_name,
                                                       TF_Tensor** value,
                                                       TF_Status* status);

=head2 TF_OperationGetAttrTensorList

=over 2

  Fills in `values` with the TF_Tensor values of the attribute `attr_name` of
  `oper`. `values` must point to an array of TF_Tensor* of length at least
  `max_values` (ideally set to TF_AttrMetadata.list_size from
  TF_OperationGetAttrMetadata(oper, attr_name)).
  
  The caller takes ownership of all the non-null TF_Tensor* entries in `values`
  (which can be deleted using TF_DeleteTensor(values[i])).

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrTensorList(TF_Operation* oper,
                                                           const char* attr_name,
                                                           TF_Tensor** values,
                                                           int max_values,
                                                           TF_Status* status);

=head2 TF_OperationGetAttrValueProto

=over 2

  Sets `output_attr_value` to the binary-serialized AttrValue proto
  representation of the value of the `attr_name` attr of `oper`.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrValueProto(
      TF_Operation* oper, const char* attr_name, TF_Buffer* output_attr_value,
      TF_Status* status);

=head2 TF_OperationGetNumAttrs

=over 2

  Get the number of attributes the operation has.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_OperationGetNumAttrs(TF_Operation* oper);

=head2 TF_OperationGetAttrNameLength

=over 2

  Get the length of the name of the ith attribute, or -1 if there is not an
  ith attribute.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_OperationGetAttrNameLength(TF_Operation* oper,
                                                          int i);

=head2 TF_OperationGetAttrName

=over 2

  Get the name of the ith attribute.  output should have the size of
  TF_OperationGetAttrNameLength(oper, i).

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationGetAttrName(TF_Operation* oper, int i,
                                                     char* output,
                                                     TF_Status* status);

=head2 TF_GraphOperationByName

=over 2

  Returns the operation in the graph with the name `oper_name`, or nullptr if
  no such operation is found.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Operation* TF_GraphOperationByName(
      TF_Graph* graph, const char* oper_name);

=head2 TF_GraphNextOperation

=over 2

  Iterate through the operations of a graph.  To use:

    size_t pos = 0;
    TF_Operation* oper;
    while ((oper = TF_GraphNextOperation(graph, &pos)) != nullptr) {
      DoSomethingWithOperation(oper);
    }

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Operation* TF_GraphNextOperation(TF_Graph* graph,
                                                            size_t* pos);

=head2 TF_GraphToGraphDef

=over 2

  Write out a serialized representation of `graph` (as a GraphDef protocol
  message) to `output_graph_def` (allocated by TF_NewBuffer()).
  `output_graph_def`'s underlying buffer will be freed when TF_DeleteBuffer()
  is called.
  
  May fail on very large graphs in the future.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_GraphToGraphDef(TF_Graph* graph,
                                                TF_Buffer* output_graph_def,
                                                TF_Status* status);

=head2 TF_GraphGetOpDef

=over 2

  Returns the serialized OpDef proto with name `op_name`, or a bad status if no
  such op exists. This can return OpDefs of functions copied into the graph.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_GraphGetOpDef(TF_Graph* graph,
                                              const char* op_name,
                                              TF_Buffer* output_op_def,
                                              TF_Status* status);

=head2 TF_GraphVersions

=over 2

  Returns the serialized VersionDef proto for this graph.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_GraphVersions(TF_Graph* graph,
                                              TF_Buffer* output_version_def,
                                              TF_Status* status);

=head2 TF_NewImportGraphDefOptions

=over 2

  Creates a new TF_ImportGraphDefOptions instance. Must be freed with
  TF_DeleteImportGraphDefOptions().

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_ImportGraphDefOptions* TF_NewImportGraphDefOptions(
      void);

=head2 TF_DeleteImportGraphDefOptions

=over 2

  Deletes an options instance created by TF_NewImportGraphDefOptions().

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteImportGraphDefOptions(
      TF_ImportGraphDefOptions* opts);

=head2 TF_ImportGraphDefOptionsSetPrefix

=over 2

  Set the prefix to be prepended to the names of nodes in `graph_def` that will
  be imported into `graph`. `prefix` is copied and has no lifetime
  requirements.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ImportGraphDefOptionsSetPrefix(
      TF_ImportGraphDefOptions* opts, const char* prefix);

=head2 TF_ImportGraphDefOptionsSetDefaultDevice

=over 2

  Set the execution device for nodes in `graph_def`.
  Only applies to nodes where a device was not already explicitly specified.
  `device` is copied and has no lifetime requirements.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ImportGraphDefOptionsSetDefaultDevice(
      TF_ImportGraphDefOptions* opts, const char* device);

=head2 TF_ImportGraphDefOptionsSetUniquifyNames

=over 2

  Set whether to uniquify imported operation names. If true, imported operation
  names will be modified if their name already exists in the graph. If false,
  conflicting names will be treated as an error. Note that this option has no
  effect if a prefix is set, since the prefix will guarantee all names are
  unique. Defaults to false.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ImportGraphDefOptionsSetUniquifyNames(
      TF_ImportGraphDefOptions* opts, unsigned char uniquify_names);

=head2 TF_ImportGraphDefOptionsSetUniquifyPrefix

=over 2

  If true, the specified prefix will be modified if it already exists as an
  operation name or prefix in the graph. If false, a conflicting prefix will be
  treated as an error. This option has no effect if no prefix is specified.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ImportGraphDefOptionsSetUniquifyPrefix(
      TF_ImportGraphDefOptions* opts, unsigned char uniquify_prefix);

=head2 TF_ImportGraphDefOptionsAddInputMapping

=over 2

  Set any imported nodes with input `src_name:src_index` to have that input
  replaced with `dst`. `src_name` refers to a node in the graph to be imported,
  `dst` references a node already existing in the graph being imported into.
  `src_name` is copied and has no lifetime requirements.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ImportGraphDefOptionsAddInputMapping(
      TF_ImportGraphDefOptions* opts, const char* src_name, int src_index,
      TF_Output dst);

=head2 TF_ImportGraphDefOptionsRemapControlDependency

=over 2

  Set any imported nodes with control input `src_name` to have that input
  replaced with `dst`. `src_name` refers to a node in the graph to be imported,
  `dst` references an operation already existing in the graph being imported
  into. `src_name` is copied and has no lifetime requirements.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ImportGraphDefOptionsRemapControlDependency(
      TF_ImportGraphDefOptions* opts, const char* src_name, TF_Operation* dst);

=head2 TF_ImportGraphDefOptionsAddControlDependency

=over 2

  Cause the imported graph to have a control dependency on `oper`. `oper`
  should exist in the graph being imported into.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ImportGraphDefOptionsAddControlDependency(
      TF_ImportGraphDefOptions* opts, TF_Operation* oper);

=head2 TF_ImportGraphDefOptionsAddReturnOutput

=over 2

  Add an output in `graph_def` to be returned via the `return_outputs` output
  parameter of TF_GraphImportGraphDef(). If the output is remapped via an input
  mapping, the corresponding existing tensor in `graph` will be returned.
  `oper_name` is copied and has no lifetime requirements.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ImportGraphDefOptionsAddReturnOutput(
      TF_ImportGraphDefOptions* opts, const char* oper_name, int index);

=head2 TF_ImportGraphDefOptionsNumReturnOutputs

=over 2

  Returns the number of return outputs added via
  TF_ImportGraphDefOptionsAddReturnOutput().

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_ImportGraphDefOptionsNumReturnOutputs(
      const TF_ImportGraphDefOptions* opts);

=head2 TF_ImportGraphDefOptionsAddReturnOperation

=over 2

  Add an operation in `graph_def` to be returned via the `return_opers` output
  parameter of TF_GraphImportGraphDef(). `oper_name` is copied and has no
  lifetime requirements.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ImportGraphDefOptionsAddReturnOperation(
      TF_ImportGraphDefOptions* opts, const char* oper_name);

=head2 TF_ImportGraphDefOptionsNumReturnOperations

=over 2

  Returns the number of return operations added via
  TF_ImportGraphDefOptionsAddReturnOperation().

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_ImportGraphDefOptionsNumReturnOperations(
      const TF_ImportGraphDefOptions* opts);

=head2 TF_ImportGraphDefResultsReturnOutputs

=over 2

  Fetches the return outputs requested via
  TF_ImportGraphDefOptionsAddReturnOutput(). The number of fetched outputs is
  returned in `num_outputs`. The array of return outputs is returned in
  `outputs`. `*outputs` is owned by and has the lifetime of `results`.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ImportGraphDefResultsReturnOutputs(
      TF_ImportGraphDefResults* results, int* num_outputs, TF_Output** outputs);

=head2 TF_ImportGraphDefResultsReturnOperations

=over 2

  Fetches the return operations requested via
  TF_ImportGraphDefOptionsAddReturnOperation(). The number of fetched
  operations is returned in `num_opers`. The array of return operations is
  returned in `opers`. `*opers` is owned by and has the lifetime of `results`.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ImportGraphDefResultsReturnOperations(
      TF_ImportGraphDefResults* results, int* num_opers, TF_Operation*** opers);

=head2 TF_ImportGraphDefResultsMissingUnusedInputMappings

=over 2

  Fetches any input mappings requested via
  TF_ImportGraphDefOptionsAddInputMapping() that didn't appear in the GraphDef
  and weren't used as input to any node in the imported graph def. The number
  of fetched mappings is returned in `num_missing_unused_input_mappings`. The
  array of each mapping's source node name is returned in `src_names`, and the
  array of each mapping's source index is returned in `src_indexes`.
  
  `*src_names`, `*src_indexes`, and the memory backing each string in
  `src_names` are owned by and have the lifetime of `results`.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ImportGraphDefResultsMissingUnusedInputMappings(
      TF_ImportGraphDefResults* results, int* num_missing_unused_input_mappings,
      const char*** src_names, int** src_indexes);

=head2 TF_DeleteImportGraphDefResults

=over 2

  Deletes a results object returned by TF_GraphImportGraphDefWithResults().

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteImportGraphDefResults(
      TF_ImportGraphDefResults* results);

=head2 TF_GraphImportGraphDefWithResults

=over 2

  Import the graph serialized in `graph_def` into `graph`.  Returns nullptr and
  a bad status on error. Otherwise, returns a populated
  TF_ImportGraphDefResults instance. The returned instance must be deleted via
  TF_DeleteImportGraphDefResults().

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_ImportGraphDefResults*
  TF_GraphImportGraphDefWithResults(TF_Graph* graph, const TF_Buffer* graph_def,
                                    const TF_ImportGraphDefOptions* options,
                                    TF_Status* status);

=head2 TF_GraphImportGraphDefWithReturnOutputs

=over 2

  Import the graph serialized in `graph_def` into `graph`.
  Convenience function for when only return outputs are needed.
  
  `num_return_outputs` must be the number of return outputs added (i.e. the
  result of TF_ImportGraphDefOptionsNumReturnOutputs()).  If
  `num_return_outputs` is non-zero, `return_outputs` must be of length
  `num_return_outputs`. Otherwise it can be null.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_GraphImportGraphDefWithReturnOutputs(
      TF_Graph* graph, const TF_Buffer* graph_def,
      const TF_ImportGraphDefOptions* options, TF_Output* return_outputs,
      int num_return_outputs, TF_Status* status);

=head2 TF_GraphImportGraphDef

=over 2

  Import the graph serialized in `graph_def` into `graph`.
  Convenience function for when no results are needed.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_GraphImportGraphDef(
      TF_Graph* graph, const TF_Buffer* graph_def,
      const TF_ImportGraphDefOptions* options, TF_Status* status);

=head2 TF_GraphCopyFunction

=over 2

  Adds a copy of function `func` and optionally its gradient function `grad`
  to `g`. Once `func`/`grad` is added to `g`, it can be called by creating
  an operation using the function's name.
  Any changes to `func`/`grad` (including deleting it) done after this method
  returns won't affect the copy of `func`/`grad` in `g`.
  If `func` or `grad` are already in `g`, TF_GraphCopyFunction has no
  effect on them, but can establish the function->gradient relationship
  between them if `func` does not already have a gradient. If `func` already
  has a gradient different from `grad`, an error is returned.
  
  `func` must not be null.
  If `grad` is null and `func` is not in `g`, `func` is added without a
  gradient.
  If `grad` is null and `func` is in `g`, TF_GraphCopyFunction is a noop.
  `grad` must have an appropriate signature as described in the doc of
  GradientDef in tensorflow/core/framework/function.proto.
  
  If successful, status is set to OK and `func` and `grad` are added to `g`.
  Otherwise, status is set to the encountered error and `g` is unmodified.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_GraphCopyFunction(TF_Graph* g,
                                                  const TF_Function* func,
                                                  const TF_Function* grad,
                                                  TF_Status* status);

=head2 TF_GraphNumFunctions

=over 2

  Returns the number of TF_Functions registered in `g`.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_GraphNumFunctions(TF_Graph* g);

=head2 TF_GraphGetFunctions

=over 2

  Fills in `funcs` with the TF_Function* registered in `g`.
  `funcs` must point to an array of TF_Function* of length at least
  `max_func`. In usual usage, `max_func` should be set to the result of
  TF_GraphNumFunctions(g); in that case, all the functions registered in
  `g` will be returned. Otherwise, an unspecified subset is returned.
  
  If successful, returns the number of TF_Function* successfully set in
  `funcs` and sets status to OK. The caller takes ownership of
  all the returned TF_Functions. They must be deleted with TF_DeleteFunction.
  On error, returns 0, sets status to the encountered error, and the contents
  of funcs will be undefined.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_GraphGetFunctions(TF_Graph* g, TF_Function** funcs,
                                                 int max_func, TF_Status* status);

=head2 TF_OperationToNodeDef

=over 2

  Writes the binary-serialized NodeDef proto of `oper` to `output_node_def`.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_OperationToNodeDef(TF_Operation* oper,
                                                   TF_Buffer* output_node_def,
                                                   TF_Status* status);

=head2 TF_NewWhile

=over 2

  Creates a TF_WhileParams for creating a while loop in `g`. `inputs` are
  outputs that already exist in `g` used as initial values for the loop
  variables.
  
  The returned TF_WhileParams will have all fields initialized except
  `cond_output`, `body_outputs`, and `name`. The `body_outputs` buffer will be
  allocated to size `ninputs`. The caller should build `cond_graph` and
  `body_graph` starting from the inputs, and store the final outputs in
  `cond_output` and `body_outputs`.
  
  If `status` is OK, the caller must call either TF_FinishWhile or
  TF_AbortWhile on the returned TF_WhileParams. If `status` isn't OK, the
  returned TF_WhileParams is not valid, and the caller should not call
  TF_FinishWhile() or TF_AbortWhile().
  
  Missing functionality (TODO):
  - Gradients
  - Reference-type inputs
  - Directly referencing external tensors from the cond/body graphs (this is
    possible in the Python API)

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_WhileParams TF_NewWhile(TF_Graph* g, TF_Output* inputs,
                                                   int ninputs,
                                                   TF_Status* status);

=head2 TF_FinishWhile

=over 2

  Builds the while loop specified by `params` and returns the output tensors of
  the while loop in `outputs`. `outputs` should be allocated to size
  `params.ninputs`.
  
  `params` is no longer valid once this returns.
  
  Either this or TF_AbortWhile() must be called after a successful
  TF_NewWhile() call.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_FinishWhile(const TF_WhileParams* params,
                                            TF_Status* status,
                                            TF_Output* outputs);

=head2 TF_AbortWhile

=over 2

  Frees the resources of `params` without building a while loop. `params` is
  no longer valid after this returns. Either this or TF_FinishWhile() must be
  called after a successful TF_NewWhile() call.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_AbortWhile(const TF_WhileParams* params);

=head2 TF_AddGradients

=over 2

  Adds operations to compute the partial derivatives of sum of `y`s w.r.t `x`s,
  i.e., d(y_1 + y_2 + ...)/dx_1, d(y_1 + y_2 + ...)/dx_2...
  
  `dx` are used as initial gradients (which represent the symbolic partial
  derivatives of some loss function `L` w.r.t. `y`).
  `dx` must be nullptr or have size `ny`.
  If `dx` is nullptr, the implementation will seed the gradients with
  `OnesLike` values for all shapes in `y`.
  The partial derivatives are returned in `dy`. `dy` should be allocated to
  size `nx`.
  
  Gradient nodes are automatically named under the "gradients/" prefix. To
  guarantee name uniqueness, subsequent calls to the same graph will
  append an incremental tag to the prefix: "gradients_1/", "gradients_2/", ...
  See TF_AddGradientsWithPrefix, which provides a means to specify a custom
  name prefix for operations added to a graph to compute the gradients.
  
  WARNING: This function does not yet support all the gradients that Python
  supports. See
  https://www.tensorflow.org/code/tensorflow/cc/gradients/README.md
  for instructions on how to add more C++ gradients.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT void TF_AddGradients(TF_Graph* g, TF_Output* y, int ny,
                                      TF_Output* x, int nx, TF_Output* dx,
                                      TF_Status* status, TF_Output* dy);

=head2 TF_AddGradientsWithPrefix

=over 2

  Adds operations to compute the partial derivatives of sum of `y`s w.r.t `x`s,
  i.e., d(y_1 + y_2 + ...)/dx_1, d(y_1 + y_2 + ...)/dx_2...
  This is a variant of TF_AddGradients that allows the caller to pass a
  custom name prefix to the operations added to the graph to compute the
  gradients.
  
  `dx` are used as initial gradients (which represent the symbolic partial
  derivatives of some loss function `L` w.r.t. `y`).
  `dx` must be nullptr or have size `ny`.
  If `dx` is nullptr, the implementation will seed the gradients with
  `OnesLike` values for all shapes in `y`.
  The partial derivatives are returned in `dy`. `dy` should be allocated to
  size `nx`.
  `prefix` names the scope into which all gradients operations are being added.
  `prefix` must be unique within the provided graph otherwise this operation
  will fail. If `prefix` is nullptr, the default prefixing behaviour takes
  place, see TF_AddGradients for more details.
  
  WARNING: This function does not yet support all the gradients that Python
  supports. See
  https://www.tensorflow.org/code/tensorflow/cc/gradients/README.md
  for instructions on how to add more C++ gradients.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT void TF_AddGradientsWithPrefix(TF_Graph* g, const char* prefix,
                                                TF_Output* y, int ny,
                                                TF_Output* x, int nx,
                                                TF_Output* dx, TF_Status* status,
                                                TF_Output* dy);

=head2 TF_GraphToFunction

=over 2

  Create a TF_Function from a TF_Graph
  
  Params:
   fn_body - the graph whose operations (or subset of whose operations) will be
             converted to TF_Function.
   fn_name - the name of the new TF_Function. Should match the operation
             name (OpDef.name) regexp [A-Z][A-Za-z0-9_.\\-/]*.
             If `append_hash_to_fn_name` is false, `fn_name` must be distinct
             from other function and operation names (at least those
             registered in graphs where this function will be used).
   append_hash_to_fn_name - Must be 0 or 1. If set to 1, the actual name
                            of the function will be `fn_name` appended with
                            '_<hash_of_this_function's_definition>'.
                            If set to 0, the function's name will be `fn_name`.
   num_opers - `num_opers` contains the number of elements in the `opers` array
               or a special value of -1 meaning that no array is given.
               The distinction between an empty array of operations and no
               array of operations is necessary to distinguish the case of
               creating a function with no body (e.g. identity or permutation)
               and the case of creating a function whose body contains all
               the nodes in the graph (except for the automatic skipping, see
               below).
   opers - Array of operations to become the body of the function or null.
           - If no array is given (`num_opers` = -1), all the
           operations in `fn_body` will become part of the function
           except operations referenced in `inputs`. These operations
           must have a single output (these operations are typically
           placeholders created for the sole purpose of representing
           an input. We can relax this constraint if there are
           compelling use cases).
           - If an array is given (`num_opers` >= 0), all operations
           in it will become part of the function. In particular, no
           automatic skipping of dummy input operations is performed.
   ninputs - number of elements in `inputs` array
   inputs - array of TF_Outputs that specify the inputs to the function.
            If `ninputs` is zero (the function takes no inputs), `inputs`
            can be null. The names used for function inputs are normalized
            names of the operations (usually placeholders) pointed to by
            `inputs`. These operation names should start with a letter.
            Normalization will convert all letters to lowercase and
            non-alphanumeric characters to '_' to make resulting names match
            the "[a-z][a-z0-9_]*" pattern for operation argument names.
            `inputs` cannot contain the same tensor twice.
   noutputs - number of elements in `outputs` array
   outputs - array of TF_Outputs that specify the outputs of the function.
             If `noutputs` is zero (the function returns no outputs), `outputs`
             can be null. `outputs` can contain the same tensor more than once.
   output_names - The names of the function's outputs. `output_names` array
                  must either have the same length as `outputs`
                  (i.e. `noutputs`) or be null. In the former case,
                  the names should match the regular expression for ArgDef
                  names - "[a-z][a-z0-9_]*". In the latter case,
                  names for outputs will be generated automatically.
   opts - various options for the function, e.g. XLA's inlining control.
   description - optional human-readable description of this function.
   status - Set to OK on success and an appropriate error on failure.
  
  Note that when the same TF_Output is listed as both an input and an output,
  the corresponding function output will be equal to that input,
  rather than the original node's output.
  
  Callers must also satisfy the following constraints:
  - `inputs` cannot refer to TF_Outputs within a control flow context. For
    example, one cannot use the output of "switch" node as input.
  - `inputs` and `outputs` cannot have reference types. Reference types are
    not exposed through C API and are being replaced with Resources. We support
    reference types inside function's body to support legacy code. Do not
    use them in new code.
  - Every node in the function's body must have all of its inputs (including
    control inputs). In other words, for every node in the body, each input
    must be either listed in `inputs` or must come from another node in
    the body. In particular, it is an error to have a control edge going from
    a node outside of the body into a node in the body. This applies to control
    edges going from nodes referenced in `inputs` to nodes in the body when
    the former nodes are not in the body (automatically skipped or not
    included in explicitly specified body).
  
  Returns:
   On success, a newly created TF_Function instance. It must be deleted by
   calling TF_DeleteFunction.
  
   On failure, null.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Function* TF_GraphToFunction(
      const TF_Graph* fn_body, const char* fn_name,
      unsigned char append_hash_to_fn_name, int num_opers,
      const TF_Operation* const* opers, int ninputs, const TF_Output* inputs,
      int noutputs, const TF_Output* outputs, const char* const* output_names,
      const TF_FunctionOptions* opts, const char* description, TF_Status* status);
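
To make the normalization rule concrete, here is a self-contained C sketch; the helper name `normalize_arg_name` is ours, not part of the C API. It applies the transformation described above, lowercasing letters and mapping non-alphanumeric characters to '_' so the result matches "[a-z][a-z0-9_]*".

```c
#include <ctype.h>
#include <stddef.h>

/* Illustrative helper mirroring the input-name normalization described
 * above: letters are lowercased and every non-alphanumeric character
 * becomes '_', so the result matches "[a-z][a-z0-9_]*".
 * Writes at most out_len - 1 characters plus a NUL terminator. */
void normalize_arg_name(const char *op_name, char *out, size_t out_len) {
    size_t i = 0;
    for (; op_name[i] != '\0' && i + 1 < out_len; i++) {
        unsigned char c = (unsigned char)op_name[i];
        out[i] = isalnum(c) ? (char)tolower(c) : '_';
    }
    out[i] = '\0';
}
```

For example, an operation named "MyInput-Node" would yield the argument name "myinput_node".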

=head2 TF_GraphToFunctionWithControlOutputs

=over 2

  Similar to TF_GraphToFunction but allows specifying control outputs of the
  function.
  
   The arguments of TF_GraphToFunction have the same meaning, but the new
   arguments are as follows:
  
     ncontrol_outputs: Number of control outputs of the function.
     control_outputs: vector of TF_Operation objects to be marked as control
       outputs of the function. Operations marked as control outputs are
       guaranteed to execute.
     control_output_names: Optional. If not nullptr, vector of strings, one
       per control output, with their names to be added to the function's
       OpDef.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Function* TF_GraphToFunctionWithControlOutputs(
      const TF_Graph* fn_body, const char* fn_name,
      unsigned char append_hash_to_fn_name, int num_opers,
      const TF_Operation* const* opers, int ninputs, const TF_Output* inputs,
      int noutputs, const TF_Output* outputs, const char* const* output_names,
      int ncontrol_outputs, const TF_Operation* const* control_outputs,
      const char* const* control_output_names, const TF_FunctionOptions* opts,
      const char* description, TF_Status* status);

=head2 TF_FunctionName

=over 2

  Returns the name of the graph function.
  The return value points to memory that is only usable until the next
  mutation to *func.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern const char* TF_FunctionName(TF_Function* func);

=head2 TF_FunctionToFunctionDef

=over 2

  Write out a serialized representation of `func` (as a FunctionDef protocol
  message) to `output_func_def` (allocated by TF_NewBuffer()).
  `output_func_def`'s underlying buffer will be freed when TF_DeleteBuffer()
  is called.
  
  May fail on very large graphs in the future.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_FunctionToFunctionDef(TF_Function* func,
                                                      TF_Buffer* output_func_def,
                                                      TF_Status* status);

=head2 TF_FunctionImportFunctionDef

=over 2

  Construct and return the function whose FunctionDef representation is
  serialized in `proto`. `proto_len` must equal the number of bytes
  pointed to by `proto`.
  Returns:
   On success, a newly created TF_Function instance. It must be deleted by
   calling TF_DeleteFunction.
  
   On failure, null.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Function* TF_FunctionImportFunctionDef(
      const void* proto, size_t proto_len, TF_Status* status);

=head2 TF_FunctionSetAttrValueProto

=over 2

  Sets function attribute named `attr_name` to value stored in `proto`.
  If this attribute is already set to another value, it is overridden.
  `proto` should point to a sequence of bytes of length `proto_len`
  representing a binary serialization of an AttrValue protocol
  buffer.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_FunctionSetAttrValueProto(TF_Function* func,
                                                          const char* attr_name,
                                                          const void* proto,
                                                          size_t proto_len,
                                                          TF_Status* status);

=head2 TF_FunctionGetAttrValueProto

=over 2

  Sets `output_attr_value` to the binary-serialized AttrValue proto
  representation of the value of the `attr_name` attr of `func`.
  If `attr_name` attribute is not present, status is set to an error.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_FunctionGetAttrValueProto(
      TF_Function* func, const char* attr_name, TF_Buffer* output_attr_value,
      TF_Status* status);

=head2 TF_DeleteFunction

=over 2

  Frees the memory used by the `func` struct.
  TF_DeleteFunction is a noop if `func` is null.
  Deleting a function does not remove it from any graphs it was copied to.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteFunction(TF_Function* func);

=head2 TF_TryEvaluateConstant

=over 2

  Attempts to evaluate `output`. This will only be possible if `output` doesn't
  depend on any graph inputs (though this function is safe to call even when
  that is not the case).
  
  If the evaluation is successful, this function returns true and `output`'s
  value is returned in `result`. Otherwise returns false. An error status is
  returned if something is wrong with the graph or input. Note that this may
  return false even if no error status is set.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern unsigned char TF_TryEvaluateConstant(TF_Graph* graph,
                                                             TF_Output output,
                                                             TF_Tensor** result,
                                                             TF_Status* status);

=head2 TF_NewSession

=over 2

  Return a new execution session with the associated graph, or NULL on
  error. Does not take ownership of any input parameters.
  
  *`graph` must be a valid graph (not deleted or nullptr). `graph` will be
  kept alive for the lifetime of the returned TF_Session. New nodes can still
  be added to `graph` after this call.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Session* TF_NewSession(TF_Graph* graph,
                                                  const TF_SessionOptions* opts,
                                                  TF_Status* status);

=head2 TF_LoadSessionFromSavedModel

=over 2

  Creates a new TF_Session (returned on success) using `session_options`, and
  then initializes state (restoring tensors and other assets) using
  `run_options`.
  
  Any NULL and non-NULL value combinations for (`run_options`, `meta_graph_def`)
  are valid.
  
  - `export_dir` must be set to the path of the exported SavedModel.
  - `tags` must include the set of tags used to identify one MetaGraphDef in
     the SavedModel.
  - `graph` must be a graph newly allocated with TF_NewGraph().
  
  If successful, populates `graph` with the contents of the Graph and
  `meta_graph_def` with the MetaGraphDef of the loaded model.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Session* TF_LoadSessionFromSavedModel(
      const TF_SessionOptions* session_options, const TF_Buffer* run_options,
      const char* export_dir, const char* const* tags, int tags_len,
      TF_Graph* graph, TF_Buffer* meta_graph_def, TF_Status* status);

=head2 TF_CloseSession

=over 2

  Close a session.
  
  Contacts any other processes associated with the session, if applicable.
  May not be called after TF_DeleteSession().

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_CloseSession(TF_Session*, TF_Status* status);

=head2 TF_DeleteSession

=over 2

  Destroy a session object.
  
  Even if error information is recorded in *status, this call discards all
  local resources associated with the session.  The session may not be used
  during or after this call (and the session drops its reference to the
  corresponding graph).

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteSession(TF_Session*, TF_Status* status);

=head2 TF_SessionRun

=over 2

  Run the graph associated with the session starting with the supplied inputs
  (inputs[0,ninputs-1] with corresponding values in input_values[0,ninputs-1]).
  
  Any NULL and non-NULL value combinations for (`run_options`,
  `run_metadata`) are valid.
  
     - `run_options` may be NULL, in which case it will be ignored; or
       non-NULL, in which case it must point to a `TF_Buffer` containing the
       serialized representation of a `RunOptions` protocol buffer.
     - `run_metadata` may be NULL, in which case it will be ignored; or
       non-NULL, in which case it must point to an empty, freshly allocated
       `TF_Buffer` that may be updated to contain the serialized representation
       of a `RunMetadata` protocol buffer.
  
  The caller retains ownership of `input_values` (which can be deleted using
  TF_DeleteTensor). The caller also retains ownership of `run_options` and/or
  `run_metadata` (when not NULL) and should manually call TF_DeleteBuffer on
  them.
  
  On success, the tensors corresponding to outputs[0,noutputs-1] are placed in
  output_values[]. Ownership of the elements of output_values[] is transferred
  to the caller, which must eventually call TF_DeleteTensor on them.
  
  On failure, output_values[] contains NULLs.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SessionRun(
      TF_Session* session,
      // RunOptions
      const TF_Buffer* run_options,
      // Input tensors
      const TF_Output* inputs, TF_Tensor* const* input_values, int ninputs,
      // Output tensors
      const TF_Output* outputs, TF_Tensor** output_values, int noutputs,
      // Target operations
      const TF_Operation* const* target_opers, int ntargets,
      // RunMetadata
      TF_Buffer* run_metadata,
      // Output status
      TF_Status*);

=head2 TF_SessionPRunSetup

=over 2

  Set up the graph with the intended feeds (inputs) and fetches (outputs) for a
  sequence of partial run calls.
  
  On success, returns a handle that is used for subsequent PRun calls. The
  handle should be deleted with TF_DeletePRunHandle when it is no longer
  needed.
  
  On failure, out_status contains a tensorflow::Status with an error
  message. *handle is set to nullptr.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SessionPRunSetup(
      TF_Session*,
      // Input names
      const TF_Output* inputs, int ninputs,
      // Output names
      const TF_Output* outputs, int noutputs,
      // Target operations
      const TF_Operation* const* target_opers, int ntargets,
      // Output handle
      const char** handle,
      // Output status
      TF_Status*);

=head2 TF_SessionPRun

=over 2

  Continue to run the graph with additional feeds and fetches. The
  execution state is uniquely identified by the handle.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_SessionPRun(
      TF_Session*, const char* handle,
      // Input tensors
      const TF_Output* inputs, TF_Tensor* const* input_values, int ninputs,
      // Output tensors
      const TF_Output* outputs, TF_Tensor** output_values, int noutputs,
      // Target operations
      const TF_Operation* const* target_opers, int ntargets,
      // Output status
      TF_Status*);

=head2 TF_DeletePRunHandle

=over 2

  Deletes a handle allocated by TF_SessionPRunSetup.
  Once called, no more calls to TF_SessionPRun should be made.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_DeletePRunHandle(const char* handle);

=head2 TF_NewDeprecatedSession

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_DeprecatedSession* TF_NewDeprecatedSession(
      const TF_SessionOptions*, TF_Status* status);

=head2 TF_CloseDeprecatedSession

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_CloseDeprecatedSession(TF_DeprecatedSession*,
                                                       TF_Status* status);

=head2 TF_DeleteDeprecatedSession

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteDeprecatedSession(TF_DeprecatedSession*,
                                                        TF_Status* status);

=head2 TF_Reset

=over 2

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_Reset(const TF_SessionOptions* opt,
                                      const char** containers, int ncontainers,
                                      TF_Status* status);

=head2 TF_ExtendGraph

=over 2

  Treat the bytes proto[0,proto_len-1] as a serialized GraphDef and
  add the nodes in that GraphDef to the graph for the session.
  
  Prefer use of TF_Session and TF_GraphImportGraphDef over this.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ExtendGraph(TF_DeprecatedSession*,
                                            const void* proto, size_t proto_len,
                                            TF_Status*);

=head2 TF_Run

=over 2

  See TF_SessionRun() above.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_Run(TF_DeprecatedSession*,
                                    const TF_Buffer* run_options,
                                    const char** input_names, TF_Tensor** inputs,
                                    int ninputs, const char** output_names,
                                    TF_Tensor** outputs, int noutputs,
                                    const char** target_oper_names, int ntargets,
                                    TF_Buffer* run_metadata, TF_Status*);

=head2 TF_PRunSetup

=over 2

  See TF_SessionPRunSetup() above.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_PRunSetup(TF_DeprecatedSession*,
                                          const char** input_names, int ninputs,
                                          const char** output_names, int noutputs,
                                          const char** target_oper_names,
                                          int ntargets, const char** handle,
                                          TF_Status*);

=head2 TF_PRun

=over 2

  See TF_SessionPRun above.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_PRun(TF_DeprecatedSession*, const char* handle,
                                     const char** input_names, TF_Tensor** inputs,
                                     int ninputs, const char** output_names,
                                     TF_Tensor** outputs, int noutputs,
                                     const char** target_oper_names, int ntargets,
                                     TF_Status*);

=head2 TF_SessionListDevices

=over 2

  Lists all devices in a TF_Session.
  
  Caller takes ownership of the returned TF_DeviceList* which must eventually
  be freed with a call to TF_DeleteDeviceList.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_DeviceList* TF_SessionListDevices(TF_Session* session,
                                                             TF_Status* status);

=head2 TF_DeprecatedSessionListDevices

=over 2

  Lists all devices in a TF_Session.
  
  Caller takes ownership of the returned TF_DeviceList* which must eventually
  be freed with a call to TF_DeleteDeviceList.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_DeviceList* TF_DeprecatedSessionListDevices(
      TF_DeprecatedSession* session, TF_Status* status);

=head2 TF_DeleteDeviceList

=over 2

  Deallocates the device list.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteDeviceList(TF_DeviceList* list);

=head2 TF_DeviceListCount

=over 2

  Counts the number of elements in the device list.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int TF_DeviceListCount(const TF_DeviceList* list);

=head2 TF_DeviceListName

=over 2

  Retrieves the full name of the device (e.g. /job:worker/replica:0/...)
  The return value will be a pointer to a null terminated string. The caller
  must not modify or delete the string. It will be deallocated upon a call to
  TF_DeleteDeviceList.
  
  If index is out of bounds, an error code will be set in the status object,
  and a null pointer will be returned.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern const char* TF_DeviceListName(const TF_DeviceList* list,
                                                      int index,
                                                      TF_Status* status);

=head2 TF_DeviceListType

=over 2

  Retrieves the type of the device at the given index.
  
  The caller must not modify or delete the string. It will be deallocated upon
  a call to TF_DeleteDeviceList.
  
  If index is out of bounds, an error code will be set in the status object,
  and a null pointer will be returned.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern const char* TF_DeviceListType(const TF_DeviceList* list,
                                                      int index,
                                                      TF_Status* status);

=head2 TF_DeviceListMemoryBytes

=over 2

  Retrieve the amount of memory associated with a given device.
  
  If index is out of bounds, an error code will be set in the status object,
  and -1 will be returned.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern int64_t TF_DeviceListMemoryBytes(
      const TF_DeviceList* list, int index, TF_Status* status);

=head2 TF_DeviceListIncarnation

=over 2

  Retrieve the incarnation number of a given device.
  
  If index is out of bounds, an error code will be set in the status object,
  and 0 will be returned.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern uint64_t TF_DeviceListIncarnation(
      const TF_DeviceList* list, int index, TF_Status* status);

=head2 TF_LoadLibrary

=over 2

  Load the library specified by library_filename and register the ops and
  kernels present in that library.
  
  Pass "library_filename" to a platform-specific mechanism for dynamically
  loading a library. The rules for determining the exact location of the
  library are platform-specific and are not documented here.
  
  On success, place OK in status and return the newly created library handle.
  The caller owns the library handle.
  
  On failure, place an error status in status and return NULL.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Library* TF_LoadLibrary(const char* library_filename,
                                                   TF_Status* status);

=head2 TF_GetOpList

=over 2

  Get the OpList of OpDefs defined in the library pointed to by lib_handle.
  
  Returns a TF_Buffer. The memory pointed to by the result is owned by
  lib_handle. The data in the buffer will be the serialized OpList proto for
  ops defined in the library.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Buffer TF_GetOpList(TF_Library* lib_handle);

=head2 TF_DeleteLibraryHandle

=over 2

  Frees the memory associated with the library handle.
  Does NOT unload the library.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteLibraryHandle(TF_Library* lib_handle);

=head2 TF_GetAllOpList

=over 2

  Get the OpList of all OpDefs defined in this address space.
  Returns a TF_Buffer, ownership of which is transferred to the caller
  (and can be freed using TF_DeleteBuffer).
  
  The data in the buffer will be the serialized OpList proto for ops registered
  in this address space.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Buffer* TF_GetAllOpList(void);

=head2 TF_NewApiDefMap

=over 2

  Creates a new TF_ApiDefMap instance.
  
  Params:
   op_list_buffer - TF_Buffer instance containing serialized OpList
     protocol buffer. (See
     https://www.tensorflow.org/code/tensorflow/core/framework/op_def.proto
     for the OpList proto definition).
   status - Set to OK on success and an appropriate error on failure.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_ApiDefMap* TF_NewApiDefMap(TF_Buffer* op_list_buffer,
                                                      TF_Status* status);

=head2 TF_DeleteApiDefMap

=over 2

  Deallocates a TF_ApiDefMap.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteApiDefMap(TF_ApiDefMap* apimap);

=head2 TF_ApiDefMapPut

=over 2

  Add ApiDefs to the map.
  
  `text` corresponds to a text representation of an ApiDefs protocol message.
  (https://www.tensorflow.org/code/tensorflow/core/framework/api_def.proto).
  
  The provided ApiDefs will be merged with existing ones in the map, with
  precedence given to the newly added version in case of conflicts with
  previous calls to TF_ApiDefMapPut.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ApiDefMapPut(TF_ApiDefMap* api_def_map,
                                             const char* text, size_t text_len,
                                             TF_Status* status);

=head2 TF_ApiDefMapGet

=over 2

  Returns a serialized ApiDef protocol buffer for the TensorFlow operation
  named `name`.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Buffer* TF_ApiDefMapGet(TF_ApiDefMap* api_def_map,
                                                   const char* name,
                                                   size_t name_len,
                                                   TF_Status* status);

=head2 TF_GetAllRegisteredKernels

=over 2

  Returns a serialized KernelList protocol buffer containing KernelDefs for all
  registered kernels.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Buffer* TF_GetAllRegisteredKernels(TF_Status* status);

=head2 TF_GetRegisteredKernelsForOp

=over 2

  Returns a serialized KernelList protocol buffer containing KernelDefs for all
  kernels registered for the operation named `name`.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Buffer* TF_GetRegisteredKernelsForOp(
      const char* name, TF_Status* status);

=head2 TF_UpdateEdge

=over 2

  Updates an edge in the graph: the input `dst` is reconnected to the output
  `new_src`.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_UpdateEdge(TF_Graph* graph, TF_Output new_src,
                                           TF_Input dst, TF_Status* status);

=head2 TF_NewServer

=over 2

  Creates a new in-process TensorFlow server configured using a serialized
  ServerDef protocol buffer provided via `proto` and `proto_len`.
  
  The server will not serve any requests until TF_ServerStart is invoked.
  The server will stop serving requests once TF_ServerStop or
  TF_DeleteServer is invoked.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern TF_Server* TF_NewServer(const void* proto,
                                                size_t proto_len,
                                                TF_Status* status);

=head2 TF_ServerStart

=over 2

  Starts an in-process TensorFlow server.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ServerStart(TF_Server* server, TF_Status* status);

=head2 TF_ServerStop

=over 2

  Stops an in-process TensorFlow server.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ServerStop(TF_Server* server, TF_Status* status);

=head2 TF_ServerJoin

=over 2

  Blocks until the server has been successfully stopped (via TF_ServerStop or
  TF_DeleteServer).

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_ServerJoin(TF_Server* server, TF_Status* status);

=head2 TF_ServerTarget

=over 2

  Returns the target string that can be provided to TF_SetTarget() to connect
  a TF_Session to `server`.
  
  The returned string is valid only until TF_DeleteServer is invoked.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern const char* TF_ServerTarget(TF_Server* server);

=head2 TF_DeleteServer

=over 2

  Destroys an in-process TensorFlow server and frees its memory. If the server
  is running, it will be stopped and joined.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteServer(TF_Server* server);

=head2 TF_RegisterLogListener

=over 2

  Register a listener method that processes printed messages.
  
  If any listeners are registered, the print operator will call all listeners
  with the printed messages and immediately return without writing to the
  logs.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_RegisterLogListener(
      void (*listener)(const char*));

=head2 TF_RegisterFilesystemPlugin

=over 2

  Register a FileSystem plugin from filename `plugin_filename`.
  
  On success, place OK in status.
  On failure, place an error status in status.

=back

  /* From <tensorflow/c/c_api.h> */
  TF_CAPI_EXPORT extern void TF_RegisterFilesystemPlugin(
      const char* plugin_filename, TF_Status* status);

=head2 TF_NewShape

=over 2

  Return a new, unknown-rank shape object. The caller is responsible for
  calling TF_DeleteShape to deallocate and destroy the returned shape.

=back

  /* From <tensorflow/c/tf_shape.h> */
  TF_CAPI_EXPORT extern TF_Shape* TF_NewShape();

=head2 TF_ShapeDims

=over 2

  Returns the rank of `shape`. If `shape` has unknown rank, returns -1.

=back

  /* From <tensorflow/c/tf_shape.h> */
  TF_CAPI_EXPORT extern int TF_ShapeDims(const TF_Shape* shape);

=head2 TF_ShapeDimSize

=over 2

  Returns the `d`th dimension of `shape`. If `shape` has unknown rank,
  invoking this function is undefined behavior. Returns -1 if dimension is
  unknown.

=back

  /* From <tensorflow/c/tf_shape.h> */
  TF_CAPI_EXPORT extern int64_t TF_ShapeDimSize(const TF_Shape* shape, int d);

=head2 TF_DeleteShape

=over 2

  Deletes `shape`.

=back

  /* From <tensorflow/c/tf_shape.h> */
  TF_CAPI_EXPORT extern void TF_DeleteShape(TF_Shape* shape);

=head2 TF_NewTensor

=over 2

  Return a new tensor that holds the bytes data[0,len-1].
  
  The data will be deallocated by a subsequent call to TF_DeleteTensor via:
       (*deallocator)(data, len, deallocator_arg)
  Clients must provide a custom deallocator function so they can pass in
  memory managed by something like numpy.
  
  May return NULL (and invoke the deallocator) if the provided data buffer
  (data, len) is inconsistent with a tensor of the given TF_DataType
  and the shape specified by (dims, num_dims).

=back

  /* From <tensorflow/c/tf_tensor.h> */
  TF_CAPI_EXPORT extern TF_Tensor* TF_NewTensor(
      TF_DataType, const int64_t* dims, int num_dims, void* data, size_t len,
      void (*deallocator)(void* data, size_t len, void* arg),
      void* deallocator_arg);
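
Because clients retain ownership of the buffer passed to TF_NewTensor, the deallocator is how that ownership is handed back when TF_DeleteTensor runs. Below is a minimal sketch of a callback matching the required signature; the name `free_buffer` and the flag argument are illustrative, not part of the API.

```c
#include <stdlib.h>

/* A deallocator with the signature TF_NewTensor expects:
 *   void (*deallocator)(void* data, size_t len, void* arg)
 * It frees the client-allocated buffer and records that fact through
 * `arg`, so a caller can observe that ownership was reclaimed. */
void free_buffer(void *data, size_t len, void *arg) {
    (void)len;                 /* buffer length, unused here */
    free(data);                /* release the client-owned memory */
    if (arg) *(int *)arg = 1;  /* note that the deallocator ran */
}
```

A tensor would then be created as `TF_NewTensor(TF_FLOAT, dims, num_dims, data, len, free_buffer, &freed_flag)`, and TF_DeleteTensor would later invoke the callback as `(*deallocator)(data, len, deallocator_arg)`.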

=head2 TF_AllocateTensor

=over 2

  Allocate and return a new Tensor.
  
  This function is an alternative to TF_NewTensor and should be used when
  memory is allocated to pass the Tensor to the C API. The allocated memory
  satisfies TensorFlow's memory alignment preferences and should be preferred
  over calling malloc and free.
  
  The caller must set the Tensor values by writing them to the pointer returned
  by TF_TensorData with length TF_TensorByteSize.

=back

  /* From <tensorflow/c/tf_tensor.h> */
  TF_CAPI_EXPORT extern TF_Tensor* TF_AllocateTensor(TF_DataType,
                                                     const int64_t* dims,
                                                     int num_dims, size_t len);

=head2 TF_TensorMaybeMove

=over 2

  Deletes `tensor` and returns a new TF_Tensor with the same content if
  possible. Returns nullptr and leaves `tensor` untouched if not.

=back

  /* From <tensorflow/c/tf_tensor.h> */
  TF_CAPI_EXPORT extern TF_Tensor* TF_TensorMaybeMove(TF_Tensor* tensor);

=head2 TF_DeleteTensor

=over 2

  Destroy a tensor.

=back

  /* From <tensorflow/c/tf_tensor.h> */
  TF_CAPI_EXPORT extern void TF_DeleteTensor(TF_Tensor*);

=head2 TF_TensorType

=over 2

  Return the type of a tensor element.

=back

  /* From <tensorflow/c/tf_tensor.h> */
  TF_CAPI_EXPORT extern TF_DataType TF_TensorType(const TF_Tensor*);

=head2 TF_SetShape

=over 2

  Set a new shape for the Tensor.

=back

  /* From <tensorflow/c/tf_tensor.h> */
  TF_CAPI_EXPORT extern void TF_SetShape(TF_Tensor* tensor, const int64_t* dims,
                                         int num_dims);

=head2 TF_NumDims

=over 2

  Return the number of dimensions that the tensor has.

=back

  /* From <tensorflow/c/tf_tensor.h> */
  TF_CAPI_EXPORT extern int TF_NumDims(const TF_Tensor*);

=head2 TF_Dim

=over 2

  Return the length of the tensor in the "dim_index" dimension.
  REQUIRES: 0 <= dim_index < TF_NumDims(tensor)

=back

  /* From <tensorflow/c/tf_tensor.h> */
  TF_CAPI_EXPORT extern int64_t TF_Dim(const TF_Tensor* tensor, int dim_index);

=head2 TF_TensorByteSize

=over 2

  Return the size of the underlying data in bytes.

=back

  /* From <tensorflow/c/tf_tensor.h> */
  TF_CAPI_EXPORT extern size_t TF_TensorByteSize(const TF_Tensor*);

=head2 TF_TensorData

=over 2

  Return a pointer to the underlying data buffer.

=back

  /* From <tensorflow/c/tf_tensor.h> */
  TF_CAPI_EXPORT extern void* TF_TensorData(const TF_Tensor*);

=head2 TF_TensorElementCount

=over 2

  Returns the number of elements in the tensor.

=back

  /* From <tensorflow/c/tf_tensor.h> */
  TF_CAPI_EXPORT extern int64_t TF_TensorElementCount(const TF_Tensor* tensor);
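
The element count follows from the shape: it is the product of the dimensions reported by TF_Dim, and for fixed-size dtypes TF_TensorByteSize equals that count times the per-element size. A self-contained sketch of the arithmetic (the helper names are ours, not part of the C API):

```c
#include <stdint.h>
#include <stddef.h>

/* Product of the dimensions: what TF_TensorElementCount reports for a
 * tensor of shape dims[0..num_dims-1]. A rank-0 (scalar) tensor has one
 * element; any zero-length dimension yields zero elements. */
int64_t shape_element_count(const int64_t *dims, int num_dims) {
    int64_t count = 1;
    for (int i = 0; i < num_dims; i++)
        count *= dims[i];
    return count;
}

/* For fixed-size dtypes, the byte size is element count times the
 * per-element size (e.g. 4 bytes for a 32-bit float). */
size_t tensor_byte_size(const int64_t *dims, int num_dims, size_t elem_size) {
    return (size_t)shape_element_count(dims, num_dims) * elem_size;
}
```

For example, a float32 tensor of shape {2, 3, 4} has 24 elements and occupies 96 bytes.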

=head2 TF_TensorBitcastFrom

=over 2

  Copy the internal data representation of `from` to `to`. `new_dims` and
  `num_new_dims` specify the new shape of the `to` tensor, `type` specifies its
  data type. On success, *status is set to TF_OK and the two tensors share the
  same data buffer.
  
  This call requires that the `from` tensor and the given type and shape
  (`new_dims` and `num_new_dims`) are "compatible" (i.e. they occupy the same
  number of bytes). Specifically, given
  from_type_size = TF_DataTypeSize(TF_TensorType(from)):
  
  ShapeElementCount(new_dims, num_new_dims) * TF_DataTypeSize(type)
  
  must equal
  
  TF_TensorElementCount(from) * from_type_size
  
  where ShapeElementCount would be the number of elements in a tensor with
  the given shape.
  
  In addition, this function requires:
    * TF_DataTypeSize(TF_TensorType(from)) != 0
    * TF_DataTypeSize(type) != 0
  
  If any of the requirements are not met, *status is set to
  TF_INVALID_ARGUMENT.

=back

  /* From <tensorflow/c/tf_tensor.h> */
  TF_CAPI_EXPORT extern void TF_TensorBitcastFrom(const TF_Tensor* from,
                                                  TF_DataType type, TF_Tensor* to,
                                                  const int64_t* new_dims,
                                                  int num_new_dims,
                                                  TF_Status* status);
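
The "compatible" requirement is a pure byte-count equality, so it can be sketched without the library. The helper below is ours, not part of the C API; real code would obtain the element sizes via TF_DataTypeSize and the source count via TF_TensorElementCount.

```c
#include <stdint.h>
#include <stddef.h>

/* Byte-count compatibility check for a bitcast, per the contract above:
 * the new shape at the new element size must occupy exactly as many
 * bytes as the source tensor does at its element size. Variable-size
 * dtypes (size 0) are rejected, matching the TF_DataTypeSize != 0
 * requirements. */
int bitcast_compatible(int64_t from_elem_count, size_t from_type_size,
                       const int64_t *new_dims, int num_new_dims,
                       size_t type_size) {
    if (from_type_size == 0 || type_size == 0)
        return 0;
    int64_t new_count = 1;
    for (int i = 0; i < num_new_dims; i++)
        new_count *= new_dims[i];
    return (uint64_t)new_count * type_size ==
           (uint64_t)from_elem_count * from_type_size;
}
```

For example, 4 float32 elements (16 bytes) can be viewed as a 2x2 int32 tensor, but not as 3 int32 elements (12 bytes).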

=head2 TF_TensorIsAligned

=over 2

  Returns true if and only if the tensor's data is aligned.

=back

  /* From <tensorflow/c/tf_tensor.h> */
  TF_CAPI_EXPORT extern bool TF_TensorIsAligned(const TF_Tensor*);

=head2 TF_NewStatus

=over 2

  Return a new status object.

=back

  /* From <tensorflow/c/tf_status.h> */
  TF_CAPI_EXPORT extern TF_Status* TF_NewStatus(void);

=head2 TF_DeleteStatus

=over 2

  Delete a previously created status object.

=back

  /* From <tensorflow/c/tf_status.h> */
  TF_CAPI_EXPORT extern void TF_DeleteStatus(TF_Status*);

=head2 TF_SetStatus

=over 2

  Record <code, msg> in *s.  Any previous information is lost.
  A common use is to clear a status: TF_SetStatus(s, TF_OK, "");

=back

  /* From <tensorflow/c/tf_status.h> */
  TF_CAPI_EXPORT extern void TF_SetStatus(TF_Status* s, TF_Code code,
                                          const char* msg);

=head2 TF_SetPayload

=over 2

  Record <key, value> as a payload in *s. The previous payload having the
  same key (if any) is overwritten. Payload will not be added if the Status
  is OK.

=back

  /* From <tensorflow/c/tf_status.h> */
  TF_CAPI_EXPORT void TF_SetPayload(TF_Status* s, const char* key,
                                    const char* value);

=head2 TF_SetStatusFromIOError

=over 2

  Convert from an I/O error code (e.g., errno) to a TF_Status value.
  Any previous information is lost. Prefer to use this instead of TF_SetStatus
  when the error comes from I/O operations.

=back

  /* From <tensorflow/c/tf_status.h> */
  TF_CAPI_EXPORT extern void TF_SetStatusFromIOError(TF_Status* s, int error_code,
                                                     const char* context);

=head2 TF_GetCode

=over 2

  Return the code recorded in *s.

=back

  /* From <tensorflow/c/tf_status.h> */
  TF_CAPI_EXPORT extern TF_Code TF_GetCode(const TF_Status* s);

=head2 TF_Message

=over 2

  Return a pointer to the (null-terminated) error message in *s.  The
  return value points to memory that is only usable until the next
  mutation to *s.  Always returns an empty string if TF_GetCode(s) is
  TF_OK.

=back

  /* From <tensorflow/c/tf_status.h> */
  TF_CAPI_EXPORT extern const char* TF_Message(const TF_Status* s);

=head2 TF_NewBufferFromString

=over 2

  Makes a copy of the input and sets an appropriate deallocator. Useful for
  passing in read-only input protobufs.

=back

  /* From <tensorflow/c/tf_buffer.h> */
  TF_CAPI_EXPORT extern TF_Buffer* TF_NewBufferFromString(const void* proto,
                                                          size_t proto_len);

=head2 TF_NewBuffer

=over 2

  Useful for passing *out* a protobuf.

=back

  /* From <tensorflow/c/tf_buffer.h> */
  TF_CAPI_EXPORT extern TF_Buffer* TF_NewBuffer(void);

=head2 TF_DeleteBuffer

=over 2

  Deletes the buffer, invoking its deallocator on the enclosed data if one is
  set.

=back

  /* From <tensorflow/c/tf_buffer.h> */
  TF_CAPI_EXPORT extern void TF_DeleteBuffer(TF_Buffer*);

=head2 TF_GetBuffer

=over 2

  Returns the buffer struct by value; the underlying data is not copied and
  remains owned by `buffer`.

=back

  /* From <tensorflow/c/tf_buffer.h> */
  TF_CAPI_EXPORT extern TF_Buffer TF_GetBuffer(TF_Buffer* buffer);

=head2 TF_StringInit

=over 2

  Initializes `t` to an empty string. Must be called before `t` is used with
  any other TF_String function.

=back

  /* From <tensorflow/c/tf_tstring.h> */
  TF_CAPI_EXPORT extern void TF_StringInit(TF_TString *t);

=head2 TF_StringCopy

=over 2

  Copies `size` bytes from `src` into `dst`, allocating as needed.

=back

  /* From <tensorflow/c/tf_tstring.h> */
  TF_CAPI_EXPORT extern void TF_StringCopy(TF_TString *dst, const char *src,
                                           size_t size);

=head2 TF_StringAssignView

=over 2

  Makes `dst` a view over the `size` bytes at `src` without copying; the
  caller must keep `src` alive for the lifetime of the view.

=back

  /* From <tensorflow/c/tf_tstring.h> */
  TF_CAPI_EXPORT extern void TF_StringAssignView(TF_TString *dst, const char *src,
                                                 size_t size);

=head2 TF_StringGetDataPointer

=over 2

  Returns a pointer to the character data of `tstr`.

=back

  /* From <tensorflow/c/tf_tstring.h> */
  TF_CAPI_EXPORT extern const char *TF_StringGetDataPointer(
      const TF_TString *tstr);

=head2 TF_StringGetType

=over 2

  Returns the underlying representation type of `str` (e.g. small, large,
  offset, or view).

=back

  /* From <tensorflow/c/tf_tstring.h> */
  TF_CAPI_EXPORT extern TF_TString_Type TF_StringGetType(const TF_TString *str);

=head2 TF_StringGetSize

=over 2

  Returns the length of `tstr` in bytes.

=back

  /* From <tensorflow/c/tf_tstring.h> */
  TF_CAPI_EXPORT extern size_t TF_StringGetSize(const TF_TString *tstr);

=head2 TF_StringGetCapacity

=over 2

  Returns the current capacity of `str` in bytes.

=back

  /* From <tensorflow/c/tf_tstring.h> */
  TF_CAPI_EXPORT extern size_t TF_StringGetCapacity(const TF_TString *str);

=head2 TF_StringDealloc

=over 2

  Frees any heap memory owned by `tstr`.

=back

  /* From <tensorflow/c/tf_tstring.h> */
  TF_CAPI_EXPORT extern void TF_StringDealloc(TF_TString *tstr);

=head2 TF_DataTypeSize

=over 2

  TF_DataTypeSize returns the sizeof() for the underlying type corresponding
  to the given TF_DataType enum value. Returns 0 for variable length types
  (eg. TF_STRING) or on failure.

=back

  /* From <tensorflow/c/tf_datatype.h> */
  TF_CAPI_EXPORT extern size_t TF_DataTypeSize(TF_DataType dt);

=head2 TF_NewOpDefinitionBuilder

=over 2

  Returns a newly allocated op definition builder for the given op name. The
  returned builder may be customized with the `TF_OpDefinitionBuilder...`
  functions and then registered with TensorFlow with TF_RegisterOpDefinition.
  
  The returned pointer is either freed by a call to TF_RegisterOpDefinition, or
  can be manually deleted by TF_DeleteOpDefinitionBuilder if it is never
  registered.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern TF_OpDefinitionBuilder* TF_NewOpDefinitionBuilder(
      const char* op_name);

=head2 TF_RegisterOpDefinition

=over 2

  Registers the given op builder with TensorFlow. Indicates success or
  otherwise in the given status.
  
  `builder` is freed whether the op was successfully registered or not. You
  must call either this function or TF_DeleteOpDefinitionBuilder to free the
  builder, but never both.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_RegisterOpDefinition(
      TF_OpDefinitionBuilder* builder, TF_Status* status);

=head2 TF_DeleteOpDefinitionBuilder

=over 2

  Frees the given op definition builder. You must call either this function or
  TF_RegisterOpDefinition to free the builder, but never both.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_DeleteOpDefinitionBuilder(
      TF_OpDefinitionBuilder* builder);

=head2 TF_OpDefinitionBuilderAddAttr

=over 2

  Adds an attr to the given TF_OpDefinitionBuilder. The spec has
  format "<name>:<type>" or "<name>:<type>=<default>"
  where <name> matches regexp [a-zA-Z][a-zA-Z0-9_]*.
  By convention, names containing only capital letters are reserved for
  attributes whose values can be inferred by the operator implementation if not
  supplied by the user. If the attribute name contains characters other than
  capital letters, the operator expects the user to provide the attribute value
  at operation runtime.
  
  <type> can be:
    "string", "int", "float", "bool", "type", "shape", or "tensor"
    "numbertype", "realnumbertype", "quantizedtype"
        (meaning "type" with a restriction on valid values)
    "{int32,int64}" or "{realnumbertype,quantizedtype,string}"
        (meaning "type" with a restriction containing unions of value types)
    "{\"foo\", \"bar\n baz\"}", or "{'foo', 'bar\n baz'}"
        (meaning "string" with a restriction on valid values)
    "list(string)", ..., "list(tensor)", "list(numbertype)", ...
        (meaning lists of the above types)
    "int >= 2" (meaning "int" with a restriction on valid values)
    "list(string) >= 2", "list(int) >= 2"
        (meaning "list(string)" / "list(int)" with length at least 2)
  <default>, if included, should use the Proto text format
  of <type>.  For lists use [a, b, c] format.
  
  Note that any attr specifying the length of an input or output will
  get a default minimum of 1 unless the >= # syntax is used.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_OpDefinitionBuilderAddAttr(
      TF_OpDefinitionBuilder* builder, const char* attr_spec);

=head2 TF_OpDefinitionBuilderAddInput

=over 2

  Adds an input to this TF_OpDefinitionBuilder.
  The spec has form "<name>:<type-expr>" or "<name>:Ref(<type-expr>)"
  where <name> matches regexp [a-z][a-z0-9_]* and <type-expr> can be:
  * For a single tensor: <type>
  * For a sequence of tensors with the same type: <number>*<type>
  * For a sequence of tensors with different types: <type-list>
  Where:
    <type> is either one of "float", "int32", "string", ...
           or the name of an attr (see TF_OpDefinitionBuilderAddAttr)
           with type "type".
    <number> is the name of an attr with type "int".
    <type-list> is the name of an attr with type "list(type)".

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_OpDefinitionBuilderAddInput(
      TF_OpDefinitionBuilder* builder, const char* input_spec);

=head2 TF_OpDefinitionBuilderAddOutput

=over 2

  Adds an output to this TF_OpDefinitionBuilder.
  The spec has form "<name>:<type-expr>" or "<name>:Ref(<type-expr>)"
  where <name> matches regexp [a-z][a-z0-9_]* and <type-expr> can be:
  * For a single tensor: <type>
  * For a sequence of tensors with the same type: <number>*<type>
  * For a sequence of tensors with different types: <type-list>
  Where:
    <type> is either one of "float", "int32", "string", ...
           or the name of an attr (see TF_OpDefinitionBuilderAddAttr)
           with type "type".
    <number> is the name of an attr with type "int".
    <type-list> is the name of an attr with type "list(type)".

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_OpDefinitionBuilderAddOutput(
      TF_OpDefinitionBuilder* builder, const char* output_spec);

=head2 TF_OpDefinitionBuilderSetIsCommutative

=over 2

  Sets the commutative property for the op built by the given builder.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_OpDefinitionBuilderSetIsCommutative(
      TF_OpDefinitionBuilder* builder, bool is_commutative);

=head2 TF_OpDefinitionBuilderSetIsAggregate

=over 2

  Sets the is_aggregate property of the builder to the given value.
  
  If is_aggregate is true, then the operation produced by this builder accepts
  N >= 2 inputs and produces 1 output, all of the same type. It should be
  associative and commutative, and produce output with the same shape as the
  input. The optimizer may replace an aggregate op taking input from multiple
  devices with a tree of aggregate ops that aggregate locally within each
  device (and possibly within groups of nearby devices) before communicating.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_OpDefinitionBuilderSetIsAggregate(
      TF_OpDefinitionBuilder* builder, bool is_aggregate);

=head2 TF_OpDefinitionBuilderSetIsStateful

=over 2

  Sets the is_stateful property of the builder to the given value.
  
  The op built by this builder is stateful if its behavior depends on some
  state beyond its input tensors (e.g. variable reading op) or if it has a
  side-effect (e.g. printing or asserting ops). Equivalently, stateless ops
  must always produce the same output for the same input and have no
  side-effects.
  
  By default Ops may be moved between devices. Stateful ops should either not
  be moved, or should only be moved if that state can also be moved (e.g. via
  some sort of save / restore). Stateful ops are guaranteed to never be
  optimized away by Common Subexpression Elimination (CSE).

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_OpDefinitionBuilderSetIsStateful(
      TF_OpDefinitionBuilder* builder, bool is_stateful);

=head2 TF_OpDefinitionBuilderSetAllowsUninitializedInput

=over 2

  Sets the allows_uninitialized_input property of the operation built by this
  builder.
  
  By default, all inputs to an Op must be initialized Tensors. Ops that may
  initialize tensors for the first time should set this field to true, to allow
  the Op to take an uninitialized Tensor as input.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_OpDefinitionBuilderSetAllowsUninitializedInput(
      TF_OpDefinitionBuilder* builder, bool allows_uninitialized_input);

=head2 TF_OpDefinitionBuilderDeprecated

=over 2

  Adds a deprecation warning for the given op. This indicates to the user that
  `version` is the first TensorFlow GraphDef version for which the operation is
  deprecated. `explanation` should contain the reason for the deprecation and
  what to use instead.
  
  This function is only an indicator that the operation may disappear in a
  version of TensorFlow after `version`. It does not affect op registration.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_OpDefinitionBuilderDeprecated(
      TF_OpDefinitionBuilder* builder, int version, const char* explanation);

=head2 TF_OpDefinitionBuilderSetShapeInferenceFunction

=over 2

  Sets the shape inference function for the op.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_OpDefinitionBuilderSetShapeInferenceFunction(
      TF_OpDefinitionBuilder* builder,
      void (*shape_inference_func)(TF_ShapeInferenceContext* ctx,
                                   TF_Status* status));

=head2 TF_ShapeInferenceContextNumInputs

=over 2

  Returns the number of inputs in the given shape inference context.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern int64_t TF_ShapeInferenceContextNumInputs(
      TF_ShapeInferenceContext* ctx);

=head2 TF_NewShapeHandle

=over 2

  Returns a newly allocated shape handle. The shapes represented by these
  handles may be queried or mutated with the corresponding
  TF_ShapeInferenceContext...  functions.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern TF_ShapeHandle* TF_NewShapeHandle();

=head2 TF_ShapeInferenceContextGetInput

=over 2

  Places the ith input of the given shape inference context into the given
  shape handle, or sets *status to a value other than TF_OK indicating why the
  input could not be retrieved
  (for example, if i < 0 || i >= TF_ShapeInferenceContextNumInputs(ctx)).

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContextGetInput(
      TF_ShapeInferenceContext* ctx, int i, TF_ShapeHandle* handle,
      TF_Status* status);

=head2 TF_ShapeInferenceContextSetOutput

=over 2

  Places the given shape handle into the `i`th output position of the given
  context. Internally, the shape handle is copied; the caller may subsequently
  delete `handle`.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT
  extern void TF_ShapeInferenceContextSetOutput(TF_ShapeInferenceContext* ctx,
                                                int i, TF_ShapeHandle* handle,
                                                TF_Status* status);

=head2 TF_ShapeInferenceContextScalar

=over 2

  Returns a newly-allocated scalar shape handle. The returned handle should
  be freed with TF_DeleteShapeHandle.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern TF_ShapeHandle* TF_ShapeInferenceContextScalar(
      TF_ShapeInferenceContext* ctx);

=head2 TF_ShapeInferenceContextVectorFromSize

=over 2

  Returns a newly-allocated shape handle representing a vector of the given
  size. The returned handle should be freed with TF_DeleteShapeHandle.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern TF_ShapeHandle* TF_ShapeInferenceContextVectorFromSize(
      TF_ShapeInferenceContext* ctx, size_t size);

=head2 TF_NewDimensionHandle

=over 2

  Returns a newly allocated dimension handle. It must be freed with
  TF_DeleteDimensionHandle.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern TF_DimensionHandle* TF_NewDimensionHandle();

=head2 TF_ShapeInferenceContext_GetAttrType

=over 2

  Interprets the named shape inference context attribute as a TF_DataType and
  places it into *val, setting *status to TF_OK.
  
  If the attribute cannot be found or cannot be interpreted as a TF_DataType,
  *status is populated with an error.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContext_GetAttrType(
      TF_ShapeInferenceContext* ctx, const char* attr_name, TF_DataType* val,
      TF_Status* status);

=head2 TF_ShapeInferenceContextRank

=over 2

  Returns the rank of the shape represented by the given handle.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern int64_t TF_ShapeInferenceContextRank(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* handle);

=head2 TF_ShapeInferenceContextRankKnown

=over 2

  Returns 1 if `handle` has a known rank, 0 otherwise.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern int TF_ShapeInferenceContextRankKnown(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* handle);

=head2 TF_ShapeInferenceContextWithRank

=over 2

  If <handle> has rank <rank>, or its rank is unknown, sets *status to TF_OK
  and places the shape with asserted rank into <*result>. Otherwise an error
  is placed into `status`.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContextWithRank(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* handle, int64_t rank,
      TF_ShapeHandle* result, TF_Status* status);

=head2 TF_ShapeInferenceContextWithRankAtLeast

=over 2

  If <handle> has rank at least <rank>, or its rank is unknown, sets *status
  to TF_OK and places the shape with asserted rank into <*result>. Otherwise
  an error is placed into `status`.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContextWithRankAtLeast(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* handle, int64_t rank,
      TF_ShapeHandle* result, TF_Status* status);

=head2 TF_ShapeInferenceContextWithRankAtMost

=over 2

  If <handle> has rank at most <rank>, or its rank is unknown, sets *status
  to TF_OK and places the shape with asserted rank into <*result>. Otherwise
  an error is placed into `status`.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContextWithRankAtMost(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* handle, int64_t rank,
      TF_ShapeHandle* result, TF_Status* status);

=head2 TF_ShapeInferenceContextDim

=over 2

  Places a handle to the ith dimension of the given shape into *result.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContextDim(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* shape_handle, int64_t i,
      TF_DimensionHandle* result);

=head2 TF_ShapeInferenceContextSubshape

=over 2

  Returns in <*result> a sub-shape of <shape_handle>, with dimensions
  [start:end]. <start> and <end> can be negative, to index from the end of the
  shape. <start> and <end> are set to the rank of <shape_handle> if > rank of
  <shape_handle>.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContextSubshape(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* shape_handle, int64_t start,
      int64_t end, TF_ShapeHandle* result, TF_Status* status);

=head2 TF_ShapeInferenceContextSetUnknownShape

=over 2

  Places an unknown shape in all outputs for the given inference context. Used
  for shape inference functions with ops whose output shapes are unknown.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContextSetUnknownShape(
      TF_ShapeInferenceContext* ctx, TF_Status* status);

=head2 TF_DimensionHandleValueKnown

=over 2

  Returns whether the given handle represents a known dimension.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern int TF_DimensionHandleValueKnown(
      TF_DimensionHandle* dim_handle);

=head2 TF_DimensionHandleValue

=over 2

  Returns the value of the given dimension.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern int64_t TF_DimensionHandleValue(
      TF_DimensionHandle* dim_handle);

=head2 TF_ShapeInferenceContextConcatenateShapes

=over 2

  Returns in <*result> the result of appending the dimensions of <second> to
  those of <first>.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_ShapeInferenceContextConcatenateShapes(
      TF_ShapeInferenceContext* ctx, TF_ShapeHandle* first,
      TF_ShapeHandle* second, TF_ShapeHandle* result, TF_Status* status);

=head2 TF_DeleteShapeHandle

=over 2

  Frees the given shape handle.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_DeleteShapeHandle(TF_ShapeHandle* handle);

=head2 TF_DeleteDimensionHandle

=over 2

  Frees the given dimension handle.

=back

  /* From <tensorflow/c/ops.h> */
  TF_CAPI_EXPORT extern void TF_DeleteDimensionHandle(TF_DimensionHandle* handle);

=head2 TF_CreateDir

=over 2

  Creates the specified directory. Typical status codes are:
   * TF_OK - successfully created the directory
   * TF_ALREADY_EXISTS - directory already exists
   * TF_PERMISSION_DENIED - dirname is not writable

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_CreateDir(const char* dirname, TF_Status* status);

=head2 TF_DeleteDir

=over 2

  Deletes the specified directory. Typical status codes are:
   * TF_OK - successfully deleted the directory
   * TF_FAILED_PRECONDITION - the directory is not empty

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_DeleteDir(const char* dirname, TF_Status* status);

=head2 TF_DeleteRecursively

=over 2

  Deletes the specified directory and all subdirectories and files underneath
  it. This is accomplished by traversing the directory tree rooted at dirname
  and deleting entries as they are encountered.
  
  If dirname itself is not readable or does not exist, *undeleted_dir_count is
  set to 1, *undeleted_file_count is set to 0 and an appropriate status (e.g.
  TF_NOT_FOUND) is returned.
  
  If dirname and all its descendants were successfully deleted, TF_OK is
  returned and both error counters are set to zero.
  
  Otherwise, while traversing the tree, undeleted_file_count and
  undeleted_dir_count are updated if an entry of the corresponding type could
  not be deleted. The returned error status represents the reason that any one
  of these entries could not be deleted.
  
  Typical status codes:
   * TF_OK - dirname exists and we were able to delete everything underneath
   * TF_NOT_FOUND - dirname doesn't exist
   * TF_PERMISSION_DENIED - dirname or some descendant is not writable
   * TF_UNIMPLEMENTED - some underlying functions (like Delete) are not
     implemented

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_DeleteRecursively(const char* dirname,
                                                  int64_t* undeleted_file_count,
                                                  int64_t* undeleted_dir_count,
                                                  TF_Status* status);

=head2 TF_FileStat

=over 2

  Obtains statistics for the given path. If status is TF_OK, *stats is
  updated, otherwise it is not touched.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_FileStat(const char* filename,
                                         TF_FileStatistics* stats,
                                         TF_Status* status);

=head2 TF_NewWritableFile

=over 2

  Creates or truncates the given filename and returns a handle to be used for
  appending data to the file. If status is TF_OK, *handle is updated and the
  caller is responsible for freeing it (see TF_CloseWritableFile).

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_NewWritableFile(const char* filename,
                                                TF_WritableFileHandle** handle,
                                                TF_Status* status);

=head2 TF_CloseWritableFile

=over 2

  Closes the given handle and frees its memory. If there was a problem closing
  the file, it is indicated by status. Memory is freed in any case.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_CloseWritableFile(TF_WritableFileHandle* handle,
                                                  TF_Status* status);

=head2 TF_SyncWritableFile

=over 2

  Syncs content of the handle to the filesystem. Blocks waiting for the
  filesystem to indicate that the content has been persisted.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_SyncWritableFile(TF_WritableFileHandle* handle,
                                                 TF_Status* status);

=head2 TF_FlushWritableFile

=over 2

  Flush local buffers to the filesystem. If the process terminates after a
  successful flush, the contents may still be persisted, since the underlying
  filesystem may eventually flush the contents.  If the OS or machine crashes
  after a successful flush, the contents may or may not be persisted, depending
  on the implementation.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_FlushWritableFile(TF_WritableFileHandle* handle,
                                                  TF_Status* status);

=head2 TF_AppendWritableFile

=over 2

  Appends the given bytes to the file. Any failure to do so is indicated in
  status.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_AppendWritableFile(TF_WritableFileHandle* handle,
                                                   const char* data,
                                                   size_t length,
                                                   TF_Status* status);

=head2 TF_DeleteFile

=over 2

  Deletes the named file, indicating success or failure in *status.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_DeleteFile(const char* filename,
                                           TF_Status* status);

=head2 TF_StringStreamNext

=over 2

  Retrieves the next item from the given TF_StringStream and places a pointer
  to it in *result. If no more items are in the list, *result is set to NULL
  and false is returned.
  
  Ownership of the items retrieved with this function remains with the library.
  Item pointers are invalidated after a call to TF_StringStreamDone.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern bool TF_StringStreamNext(TF_StringStream* list,
                                                 const char** result);

=head2 TF_StringStreamDone

=over 2

  Frees the resources associated with given string list. All pointers returned
  by TF_StringStreamNext are invalid after this call.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_StringStreamDone(TF_StringStream* list);

=head2 TF_GetChildren

=over 2

  Retrieves the list of children of the given directory. You can iterate
  through the list with TF_StringStreamNext. The caller is responsible for
  freeing the list (see TF_StringStreamDone).

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern TF_StringStream* TF_GetChildren(const char* filename,
                                                        TF_Status* status);

=head2 TF_GetLocalTempDirectories

=over 2

  Retrieves a list of directory names on the local machine that may be used for
  temporary storage. You can iterate through the list with TF_StringStreamNext.
  The caller is responsible for freeing the list (see TF_StringStreamDone).

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern TF_StringStream* TF_GetLocalTempDirectories(void);

=head2 TF_GetTempFileName

=over 2

  Creates a temporary file name with an extension.
  The caller is responsible for freeing the returned pointer.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern char* TF_GetTempFileName(const char* extension);

=head2 TF_NowNanos

=over 2

  Returns the number of nanoseconds since the Unix epoch.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern uint64_t TF_NowNanos(void);

=head2 TF_NowMicros

=over 2

  Returns the number of microseconds since the Unix epoch.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern uint64_t TF_NowMicros(void);

=head2 TF_NowSeconds

=over 2

  Returns the number of seconds since the Unix epoch.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern uint64_t TF_NowSeconds(void);

=head2 TF_DefaultThreadOptions

=over 2

  Populates a TF_ThreadOptions struct with system-default values.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_DefaultThreadOptions(TF_ThreadOptions* options);

=head2 TF_StartThread

=over 2

  Returns a new thread that is running work_func and is identified
  (for debugging/performance-analysis) by thread_name.
  
  The given param (which may be null) is passed to work_func when the thread
  starts. In this way, data may be passed from the thread back to the caller.
  
  Caller takes ownership of the result and must call TF_JoinThread on it
  eventually.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern TF_Thread* TF_StartThread(const TF_ThreadOptions* options,
                                                  const char* thread_name,
                                                  void (*work_func)(void*),
                                                  void* param);

=head2 TF_JoinThread

=over 2

  Waits for the given thread to finish execution, then deletes it.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void TF_JoinThread(TF_Thread* thread);

=head2 TF_LoadSharedLibrary

=over 2

  Load a dynamic library.
  
  Pass "library_filename" to a platform-specific mechanism for dynamically
  loading a library. The rules for determining the exact location of the
  library are platform-specific and are not documented here.
  
  On success, places TF_OK in status and returns the newly created library
  handle. Otherwise, returns NULL and sets an error status.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void* TF_LoadSharedLibrary(const char* library_filename,
                                                   TF_Status* status);

=head2 TF_GetSymbolFromLibrary

=over 2

  Get a pointer to a symbol from a dynamic library.
  
  "handle" should be a pointer returned from a previous call to
  TF_LoadSharedLibrary. On success, places TF_OK in status and returns a
  pointer to the located symbol. Otherwise, returns NULL and sets an error
  status.

=back

  /* From <tensorflow/c/env.h> */
  TF_CAPI_EXPORT extern void* TF_GetSymbolFromLibrary(void* handle,
                                                      const char* symbol_name,
                                                      TF_Status* status);

=head2 TF_Log

=over 2

  Logs a printf-style message at the given log level.

=back

  /* From <tensorflow/c/logging.h> */
  TF_CAPI_EXPORT extern void TF_Log(TF_LogLevel level, const char* fmt, ...);

=head2 TF_VLog

=over 2

  Logs a printf-style message at the given verbosity level (the C analogue
  of VLOG).

=back

  /* From <tensorflow/c/logging.h> */
  TF_CAPI_EXPORT extern void TF_VLog(int level, const char* fmt, ...);

=head2 TF_DVLog

=over 2

  Like TF_VLog, but active only in debug builds (the C analogue of DVLOG).

=back

  /* From <tensorflow/c/logging.h> */
  TF_CAPI_EXPORT extern void TF_DVLog(int level, const char* fmt, ...);

=head2 TF_NewKernelBuilder

=over 2

  Allocates a new kernel builder and returns a pointer to it.
  
  If non-null, TensorFlow will call create_func when it needs to instantiate
  the kernel. The pointer returned by create_func will be passed to
  compute_func and delete_func, thereby functioning as a "this" pointer for
  referring to kernel instances.
  
  The TF_OpKernelConstruction pointer passed to create_func is owned by
  TensorFlow and will be deleted once create_func returns. It must not be used
  after this.
  
  When TensorFlow needs to perform a computation with this kernel, it will
  call compute_func. This function will receive the pointer returned by
  create_func (or null if no create_func was provided), along with the inputs
  to the computation.
  
  The TF_OpKernelContext pointer received by compute_func is owned by
  TensorFlow and will be deleted once compute_func returns. It must not be used
  after this.
  
  Finally, when TensorFlow no longer needs the kernel, it will call
  delete_func if one is provided. This function will receive the pointer
  returned by `create_func`, or NULL if no `create_func` was provided.
  
  The caller should pass the result of this function to
  TF_RegisterKernelBuilder, which will take ownership of the pointer. If, for
  some reason, the kernel builder will not be registered, the caller should
  delete it with TF_DeleteKernelBuilder.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern TF_KernelBuilder* TF_NewKernelBuilder(
      const char* op_name, const char* device_name,
      void* (*create_func)(TF_OpKernelConstruction*),
      void (*compute_func)(void*, TF_OpKernelContext*),
      void (*delete_func)(void*));
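
The create/compute/delete contract described above can be sketched in plain C.
The C<Fake*> and C<My*> names below are invented stand-ins, not part of
libtensorflow; the mock runtime only illustrates how the pointer returned by
create_func threads through compute_func and delete_func as a "this" pointer:

```c
#include <stdlib.h>

/* Stand-ins for TF_OpKernelConstruction / TF_OpKernelContext; these mock
   types are invented for illustration and are not the libtensorflow API. */
typedef struct { int unused; } FakeConstruction;
typedef struct { int input; } FakeContext;

/* Per-instance kernel state: the "this" pointer returned by create_func. */
typedef struct { int calls; } MyKernel;

static void* my_create(FakeConstruction* ctor) {
  (void)ctor;                     /* owned by the runtime; must not be kept */
  return calloc(1, sizeof(MyKernel));
}

static void my_compute(void* kernel, FakeContext* ctx) {
  (void)ctx;
  ((MyKernel*)kernel)->calls++;   /* same pointer my_create returned */
}

static void my_delete(void* kernel) {
  free(kernel);                   /* same pointer again, released once */
}

/* A toy runtime driving the documented lifecycle: create once, compute per
   invocation, delete when done. Returns how many times compute ran. */
static int run_lifecycle(int num_computes) {
  void* k = my_create(NULL);
  FakeContext ctx = { 0 };
  for (int i = 0; i < num_computes; i++) my_compute(k, &ctx);
  int calls = ((MyKernel*)k)->calls;
  my_delete(k);
  return calls;
}
```

The real builder would receive C<my_create>, C<my_compute> and C<my_delete> as
the three function-pointer arguments of TF_NewKernelBuilder.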

=head2 TF_KernelBuilder_TypeConstraint

=over 2

  Specifies that this kernel's attribute only supports the given type.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_KernelBuilder_TypeConstraint(
      TF_KernelBuilder* kernel_builder, const char* attr_name,
      const TF_DataType type, TF_Status* status);

=head2 TF_KernelBuilder_HostMemory

=over 2

  Specify that this kernel requires/provides an input/output arg
  in host memory (instead of the default, device memory).

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_KernelBuilder_HostMemory(
      TF_KernelBuilder* kernel_builder, const char* arg_name);

=head2 TF_KernelBuilder_Priority

=over 2

  Specify a priority number for this kernel.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_KernelBuilder_Priority(
      TF_KernelBuilder* kernel_builder, int32_t priority_number);

=head2 TF_KernelBuilder_Label

=over 2

  Specify a label for this kernel.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_KernelBuilder_Label(
      TF_KernelBuilder* kernel_builder, const char* label);

=head2 TF_RegisterKernelBuilder

=over 2

  Register the given kernel builder with the TensorFlow runtime. If
  registration fails, the given status will be populated.
  
  This call takes ownership of the `builder` pointer.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_RegisterKernelBuilder(const char* kernel_name,
                                                      TF_KernelBuilder* builder,
                                                      TF_Status* status);

=head2 TF_RegisterKernelBuilderWithKernelDef

=over 2

  Register the given kernel builder with the TensorFlow runtime. If
  registration fails, the given status will be populated.
  
  This method is the same as TF_RegisterKernelBuilder except it takes in a
  serialized KernelDef, and uses it for registration, instead of building a new
  one. Users can choose to not provide a serialized KernelDef and in that case
  it's identical to TF_RegisterKernelBuilder.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_RegisterKernelBuilderWithKernelDef(
      const char* serialized_kernel_def, const char* name,
      TF_KernelBuilder* builder, TF_Status* status);

=head2 TF_DeleteKernelBuilder

=over 2

  Deletes the given TF_KernelBuilder. This should be called only if the kernel
  builder is not registered with TensorFlow via TF_RegisterKernelBuilder.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_DeleteKernelBuilder(TF_KernelBuilder* builder);

=head2 TF_GetStream

=over 2

  TF_GetStream returns the SP_Stream available in ctx.
  This function returns a stream only for devices registered using the
  StreamExecutor C API
  (tensorflow/c/experimental/stream_executor/stream_executor.h). It will return
  nullptr and set error status in all other cases.
  Experimental: this function doesn't have compatibility guarantees and is
  subject to change at any time.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern SP_Stream TF_GetStream(TF_OpKernelContext* ctx,
                                               TF_Status* status);

=head2 TF_NumInputs

=over 2

  TF_NumInputs returns the number of inputs available in ctx.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern int TF_NumInputs(TF_OpKernelContext* ctx);

=head2 TF_NumOutputs

=over 2

  TF_NumOutputs returns the number of outputs to be placed in *ctx by the
  kernel.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern int TF_NumOutputs(TF_OpKernelContext* ctx);

=head2 TF_GetInput

=over 2

  Retrieves the ith input from ctx. If TF_GetCode(status) is TF_OK, *tensor is
  populated and its ownership is passed to the caller. In any other case,
  *tensor is not modified.
  
  If i < 0 or i >= TF_NumInputs(ctx), *status is set to TF_OUT_OF_RANGE.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_GetInput(TF_OpKernelContext* ctx, int i,
                                         TF_Tensor** tensor, TF_Status* status);

=head2 TF_InputRange

=over 2

  Retrieves the start and stop indices, given the input name. Equivalent to
  OpKernel::InputRange(). `args` will contain the result indices and status.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_InputRange(TF_OpKernelContext* ctx,
                                           const char* name,
                                           TF_InputRange_Args* args);

=head2 TF_SetOutput

=over 2

  Sets the ith output of ctx to tensor. If TF_GetCode(status) is anything but
  TF_OK, ctx is left unmodified.
  
  If i < 0 or i >= TF_NumOutputs(ctx), *status is set to TF_OUT_OF_RANGE.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_SetOutput(TF_OpKernelContext* ctx, int i,
                                          const TF_Tensor* tensor,
                                          TF_Status* status);

=head2 TF_GetMutableOutput

=over 2

  Retrieves the ith output from ctx. If TF_GetCode(status) is TF_OK, *tensor is
  populated and its ownership is passed to the caller. In any other case,
  *tensor is not modified.
  
  If i < 0 or i >= TF_NumOutputs(ctx), *status is set to TF_OUT_OF_RANGE.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern TF_Tensor* TF_GetMutableOutput(TF_OpKernelContext* ctx,
                                                       int i, TF_Status* status);

=head2 TF_GetSerializedFunctionDefLibrary

=over 2

  Retrieves a serialized FunctionDefLibrary. Status will be set.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_GetSerializedFunctionDefLibrary(
      TF_OpKernelContext* ctx, TF_Buffer* serialized_function_def_library,
      TF_Status* status);

=head2 TF_GetSerializedConfigProto

=over 2

  Retrieves a serialized ConfigProto. Status will be set.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_GetSerializedConfigProto(
      TF_OpKernelContext* ctx, TF_Buffer* serialized_config_proto,
      TF_Status* status);

=head2 TF_OpKernelConstruction_Failure

=over 2

  Notifies the given OpKernelConstruction that kernel construction has failed.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_Failure(
      TF_OpKernelConstruction* ctx, TF_Status* status);

=head2 TF_OpKernelContext_Failure

=over 2

  Notifies the given OpKernelContext that the kernel's compute function has
  failed.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelContext_Failure(TF_OpKernelContext* ctx,
                                                        TF_Status* status);

=head2 TF_ExpectedOutputDataType

=over 2

  Returns the expected output data type of the ith output. If i < 0 or
  i >= TF_NumOutputs(ctx), the program aborts.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern TF_DataType TF_ExpectedOutputDataType(
      TF_OpKernelContext* ctx, int i);

=head2 TF_IsHostMemoryInput

=over 2

  Returns true if the ith input is allocated in host memory. If i < 0 or i >=
  TF_NumInputs(ctx), the program aborts.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern bool TF_IsHostMemoryInput(TF_OpKernelContext* ctx, int i,
                                                  TF_Status* status);

=head2 TF_IsHostMemoryOutput

=over 2

  Returns true if the ith output is allocated in host memory. If i < 0 or i >=
  TF_NumOutputs(ctx), the program aborts.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern bool TF_IsHostMemoryOutput(TF_OpKernelContext* ctx, int i,
                                                   TF_Status* status);

=head2 TF_StepId

=over 2

  Returns the step ID of the given context.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern int64_t TF_StepId(TF_OpKernelContext* ctx);

=head2 TF_OpKernelConstruction_GetNodeDef

=over 2

  Returns the serialized NodeDef protocol buffer for the kernel.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern TF_Buffer* TF_OpKernelConstruction_GetNodeDef(
      TF_OpKernelConstruction* ctx, TF_Status* status);

=head2 TF_GetFrameId

=over 2

  Returns the frame ID of the given context.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern uint64_t TF_GetFrameId(TF_OpKernelContext* ctx);

=head2 TF_GetIterId

=over 2

  Returns the Iter ID of the given context.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern int64_t TF_GetIterId(TF_OpKernelContext* ctx);

=head2 TF_GetGraphDefVersion

=over 2

  Returns the graph def version of the given context.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern int TF_GetGraphDefVersion(TF_OpKernelContext* ctx);

=head2 TF_GetOpKernelName

=over 2

  Returns the name of the OpKernel.
  
  The returned TF_StringView's underlying string is owned by the OpKernel and
  has the same lifetime as the OpKernel.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern TF_StringView TF_GetOpKernelName(TF_OpKernelContext* ctx);

=head2 TF_GetResourceMgrDefaultContainerName

=over 2

  Returns the default container of the resource manager in OpKernelContext.
  
  The returned TF_StringView's underlying string is owned by the OpKernel and
  has the same lifetime as the OpKernel.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern TF_StringView TF_GetResourceMgrDefaultContainerName(
      TF_OpKernelContext* ctx);

=head2 TF_GetOpKernelRequestedInput

=over 2

  Returns the name of the requested input at `index` from the OpKernel.
  
  The returned TF_StringView's underlying string is owned by the OpKernel and
  has the same lifetime as the OpKernel.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern TF_StringView TF_GetOpKernelRequestedInput(
      TF_OpKernelContext* ctx, size_t index);

=head2 TF_OpKernelConstruction_GetAttrSize

=over 2

  Get the list_size and total_size of the attribute `attr_name` of the kernel
  construction `ctx`.
  list_size - the length of the list.
  total_size - total size of the list.
    (1) If attr_type == TF_ATTR_STRING
        then total_size is the cumulative byte size
        of all the strings in the list.
    (2) If attr_type == TF_ATTR_SHAPE and the attribute is a single shape,
        then total_size is the number of dimensions
        of the shape valued attribute, or -1
        if its rank is unknown.
    (3) If attr_type == TF_ATTR_SHAPE and the attribute is a list of shapes,
        then total_size is the cumulative number
        of dimensions of all shapes in the list.
    (4) Otherwise, total_size is undefined.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrSize(
      TF_OpKernelConstruction* ctx, const char* attr_name, int32_t* list_size,
      int32_t* total_size, TF_Status* status);

=head2 TF_OpKernelConstruction_GetAttrType

=over 2

  Interprets the named kernel construction attribute as a TF_DataType and
  places it into *val. *status is set to TF_OK.
  
  If the attribute could not be found or could not be interpreted as
  TF_DataType, *status is populated with an error.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrType(
      TF_OpKernelConstruction* ctx, const char* attr_name, TF_DataType* val,
      TF_Status* status);

=head2 TF_OpKernelConstruction_GetAttrInt32

=over 2

  Interprets the named kernel construction attribute as int32_t and
  places it into *val. *status is set to TF_OK.
  
  If the attribute could not be found or could not be interpreted as
  int32, *status is populated with an error.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrInt32(
      TF_OpKernelConstruction* ctx, const char* attr_name, int32_t* val,
      TF_Status* status);

=head2 TF_OpKernelConstruction_GetAttrInt64

=over 2

  Interprets the named kernel construction attribute as int64_t and
  places it into *val. *status is set to TF_OK.
  
  If the attribute could not be found or could not be interpreted as
  int64, *status is populated with an error.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrInt64(
      TF_OpKernelConstruction* ctx, const char* attr_name, int64_t* val,
      TF_Status* status);

=head2 TF_OpKernelConstruction_GetAttrFloat

=over 2

  Interprets the named kernel construction attribute as float and
  places it into *val. *status is set to TF_OK.
  
  If the attribute could not be found or could not be interpreted as
  float, *status is populated with an error.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrFloat(
      TF_OpKernelConstruction* ctx, const char* attr_name, float* val,
      TF_Status* status);

=head2 TF_OpKernelConstruction_GetAttrBool

=over 2

  Interprets the named kernel construction attribute as bool and
  places it into *val. *status is set to TF_OK.
  
  If the attribute could not be found or could not be interpreted as
  bool, *status is populated with an error.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrBool(
      TF_OpKernelConstruction* ctx, const char* attr_name, TF_Bool* val,
      TF_Status* status);

=head2 TF_OpKernelConstruction_GetAttrString

=over 2

  Interprets the named kernel construction attribute as string and
  places it into *val. `val` must
  point to an array of length at least `max_length` (ideally set to
  total_size from TF_OpKernelConstruction_GetAttrSize(ctx,
  attr_name, list_size, total_size)). *status is set to TF_OK.
  
  If the attribute could not be found or could not be interpreted as
  string, *status is populated with an error.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrString(
      TF_OpKernelConstruction* ctx, const char* attr_name, char* val,
      size_t max_length, TF_Status* status);

=head2 TF_OpKernelConstruction_GetAttrTensor

=over 2

  Interprets the named kernel construction attribute as tensor and places it
  into *val. Allocates a new TF_Tensor which the caller is expected to take
  ownership of (and can deallocate using TF_DeleteTensor). *status is set to
  TF_OK.
  
  If the attribute could not be found or could not be interpreted as
  tensor, *status is populated with an error.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrTensor(
      TF_OpKernelConstruction* ctx, const char* attr_name, TF_Tensor** val,
      TF_Status* status);

=head2 TF_OpKernelConstruction_GetAttrTypeList

=over 2

  Interprets the named kernel construction attribute as a TF_DataType array and
  places it into *vals. *status is set to TF_OK.
  `vals` must point to an array of length at least `max_vals` (ideally set
  to list_size from
  TF_OpKernelConstruction_GetAttrSize(ctx, attr_name, list_size,
  total_size)).

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrTypeList(
      TF_OpKernelConstruction* ctx, const char* attr_name, TF_DataType* vals,
      int max_vals, TF_Status* status);

=head2 TF_OpKernelConstruction_GetAttrInt32List

=over 2

  Interprets the named kernel construction attribute as int32_t array and
  places it into *vals. *status is set to TF_OK.
  `vals` must point to an array of length at least `max_vals` (ideally set
  to list_size from
  TF_OpKernelConstruction_GetAttrSize(ctx, attr_name, list_size,
  total_size)).

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrInt32List(
      TF_OpKernelConstruction* ctx, const char* attr_name, int32_t* vals,
      int max_vals, TF_Status* status);

=head2 TF_OpKernelConstruction_GetAttrInt64List

=over 2

  Interprets the named kernel construction attribute as int64_t array and
  places it into *vals. *status is set to TF_OK.
  `vals` must point to an array of length at least `max_vals` (ideally set
  to list_size from
  TF_OpKernelConstruction_GetAttrSize(ctx, attr_name, list_size,
  total_size)).

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrInt64List(
      TF_OpKernelConstruction* ctx, const char* attr_name, int64_t* vals,
      int max_vals, TF_Status* status);

=head2 TF_OpKernelConstruction_GetAttrFloatList

=over 2

  Interprets the named kernel construction attribute as float array and
  places it into *vals. *status is set to TF_OK.
  `vals` must point to an array of length at least `max_vals` (ideally set
  to list_size from
  TF_OpKernelConstruction_GetAttrSize(ctx, attr_name, list_size,
  total_size)).

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrFloatList(
      TF_OpKernelConstruction* ctx, const char* attr_name, float* vals,
      int max_vals, TF_Status* status);

=head2 TF_OpKernelConstruction_GetAttrBoolList

=over 2

  Interprets the named kernel construction attribute as bool array and
  places it into *vals. *status is set to TF_OK.
  `vals` must point to an array of length at least `max_vals` (ideally set
  to list_size from
  TF_OpKernelConstruction_GetAttrSize(ctx, attr_name, list_size,
  total_size)).

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrBoolList(
      TF_OpKernelConstruction* ctx, const char* attr_name, TF_Bool* vals,
      int max_vals, TF_Status* status);

=head2 TF_OpKernelConstruction_GetAttrStringList

=over 2

  Interprets the named kernel construction attribute as string array and fills
  in `vals` and `lengths`, each of which must point to an array of length at
  least `max_values`. *status is set to TF_OK. The elements of `vals` will
  point to addresses in `storage` which must be at least `storage_size` bytes
  in length. Ideally, max_values would be set to list_size and `storage` would
  be at least total_size, obtained from
  TF_OpKernelConstruction_GetAttrSize(ctx, attr_name, list_size,
  total_size).

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrStringList(
      TF_OpKernelConstruction* ctx, const char* attr_name, char** vals,
      size_t* lengths, int max_values, void* storage, size_t storage_size,
      TF_Status* status);
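
The size query and the list getters above form a two-pass pattern: query the
sizes first, then allocate arrays and storage accordingly, then fill. The
following plain-C mock (the C<mock_*> names and the attribute store are
invented for illustration, not the real API) sketches that pattern for a
string-list attribute:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Mock attribute store standing in for a kernel's string-list attribute;
   the real sizes would come from TF_OpKernelConstruction_GetAttrSize. */
static const char* kAttrStrings[] = { "relu", "gelu", "tanh" };

/* Pass 1: report list_size and the cumulative byte size of all strings. */
static void mock_get_attr_size(int32_t* list_size, int32_t* total_size) {
  *list_size = 3;
  int32_t bytes = 0;
  for (int i = 0; i < 3; i++) bytes += (int32_t)strlen(kAttrStrings[i]);
  *total_size = bytes;
}

/* Pass 2: mirrors the fill call; each vals[i] points into caller-provided
   storage, and lengths[i] records each string's byte length. */
static void mock_get_attr_string_list(char** vals, size_t* lengths,
                                      int max_values, char* storage,
                                      size_t storage_size) {
  size_t used = 0;
  for (int i = 0; i < max_values; i++) {
    size_t n = strlen(kAttrStrings[i]);
    assert(used + n <= storage_size);   /* storage must cover total_size */
    memcpy(storage + used, kAttrStrings[i], n);
    vals[i] = storage + used;
    lengths[i] = n;
    used += n;
  }
}
```

A caller would size C<vals>, C<lengths> and C<storage> from the first call's
C<list_size> and C<total_size> before making the second call.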

=head2 TF_OpKernelConstruction_GetAttrTensorList

=over 2

  Interprets the named kernel construction attribute as tensor array and places
  it into *vals. *status is set to TF_OK.
  `vals` must point to an array of length at least `max_values`
  (ideally set to list_size from TF_OpKernelConstruction_GetAttrSize(ctx,
  attr_name, list_size, total_size)).
  
  The caller takes ownership of all the non-null TF_Tensor* entries in `vals`
  (which can be deleted using TF_DeleteTensor(vals[i])).

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrTensorList(
      TF_OpKernelConstruction* ctx, const char* attr_name, TF_Tensor** vals,
      int max_values, TF_Status* status);

=head2 TF_OpKernelConstruction_GetAttrFunction

=over 2

  Interprets the named kernel construction attribute as a
  tensorflow::NameAttrList and returns the serialized proto as TF_Buffer.
  `status` will be set. The caller takes ownership of the returned TF_Buffer
  (if not null) and is responsible for managing its lifetime.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern TF_Buffer* TF_OpKernelConstruction_GetAttrFunction(
      TF_OpKernelConstruction* ctx, const char* attr_name, TF_Status* status);

=head2 TF_OpKernelConstruction_HasAttr

=over 2

  Returns true if the kernel construction has the attribute `attr_name`.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern bool TF_OpKernelConstruction_HasAttr(
      TF_OpKernelConstruction* ctx, const char* attr_name, TF_Status* status);

=head2 TF_OpKernelConstruction_GetName

=over 2

  Returns the unique operation name for this OpKernel.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern TF_StringView TF_OpKernelConstruction_GetName(
      TF_OpKernelConstruction* ctx);

=head2 TF_AllocateOutput

=over 2

  Allocates Tensor for output at given index. Caller takes ownership of
  returned TF_Tensor and should deallocate it using TF_DeleteTensor(tensor).
  
  This function should be used to allocate outputs inside kernel
  compute function.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT TF_Tensor* TF_AllocateOutput(TF_OpKernelContext* context,
                                              int index, TF_DataType dtype,
                                              const int64_t* dims, int num_dims,
                                              size_t len, TF_Status* status);
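
The C<len> argument is the byte size of the output buffer, not the element
count. A small hypothetical helper (not part of the API) for deriving it from
C<dims>:

```c
#include <stdint.h>
#include <stddef.h>

/* Byte length of a dense tensor buffer: the product of all dims times the
   element size (e.g. sizeof(float) for TF_FLOAT). num_dims == 0 denotes a
   scalar, whose buffer holds exactly one element. */
static size_t tensor_byte_len(const int64_t* dims, int num_dims,
                              size_t elem_size) {
  size_t n = 1;
  for (int i = 0; i < num_dims; i++) n *= (size_t)dims[i];
  return n * elem_size;
}
```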

=head2 TF_ForwardInputOrAllocateOutput

=over 2

  Tries to forward one of the inputs given in input_indices to
  output[output_index]. If none of the given inputs can be forwarded, calls
  allocate_output() to allocate a new output buffer. The index of the
  forwarded input will be assigned to the output argument forwarded_input (if
  it is not nullptr). If no input is forwarded, forwarded_input will be
  assigned -1.

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT TF_Tensor* TF_ForwardInputOrAllocateOutput(
      TF_OpKernelContext* context, const int* candidate_input_indices,
      int num_candidate_input_indices, int output_index,
      const int64_t* output_dims, int output_num_dims, int* forwarded_input,
      TF_Status* status);

=head2 TF_AllocateTemp

=over 2

  Allocates a temporary Tensor of the specified type and shape. The
  Tensor must not be used after kernel construction is
  complete.
  
  num_dims must equal the size of array dims

=back

  /* From <tensorflow/c/kernels.h> */
  TF_CAPI_EXPORT extern TF_Tensor* TF_AllocateTemp(
      TF_OpKernelContext* context, TF_DataType dtype, const int64_t* dims,
      int num_dims, TF_AllocatorAttributes* alloc_attrs, TF_Status* status);

=head2 TF_AssignVariable

=over 2

  Exposes a higher-level assignment operation for pluggable vendors to
  implement in the plugin for training. The API takes in the context with
  indices for the input and value tensors. It also accepts the copy callback
  provided by the pluggable vendor to do the copying of the tensors. The
  caller takes ownership of the `source` and `dest` tensors and is responsible
  for freeing them with TF_DeleteTensor. This function will return an error
  when the following conditions are met:
    1. `validate_shape` is set to `true`
    2. The variable is initialized
    3. The shape of the value tensor doesn't match the shape of the variable
       tensor.

=back

  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_AssignVariable(
      TF_OpKernelContext* ctx, int input_index, int value_index,
      bool validate_shape,
      void (*copyFunc)(TF_OpKernelContext* ctx, TF_Tensor* source,
                       TF_Tensor* dest),
      TF_Status* status);

=head2 TF_AssignRefVariable

=over 2

  Exposes a higher-level assignment operation for pluggable vendors to
  implement in the plugin for training on ref variables. The API takes in the
  context with indices for the input and value tensors. It also accepts the
  copy callback provided by the pluggable vendor to do the copying of the
  tensors. The caller takes ownership of the `source` and `dest` tensors and
  is responsible for freeing them with TF_DeleteTensor.

=back

  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_AssignRefVariable(
      TF_OpKernelContext* ctx, int input_ref_index, int output_ref_index,
      int value_index, bool use_locking, bool validate_shape,
      void (*copyFunc)(TF_OpKernelContext* ctx, TF_Tensor* source,
                       TF_Tensor* dest),
      TF_Status* status);

=head2 TF_AssignUpdateVariable

=over 2

  Exposes a higher-level AssignUpdate operation for pluggable vendors to
  implement in the plugin for training. The API takes in the context with
  indices for the input and value tensors. It also accepts the copy callback
  provided by the pluggable vendor to do the copying of the tensors, and the
  update callback to apply the arithmetic operation. The caller takes
  ownership of the `source`, `dest`, `tensor` and `value` tensors and is
  responsible for freeing them with TF_DeleteTensor.

=back

  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_AssignUpdateVariable(
      TF_OpKernelContext* ctx, int input_index, int value_index, int Op,
      int isVariantType,
      void (*copyFunc)(TF_OpKernelContext* ctx, TF_Tensor* source,
                       TF_Tensor* dest),
      void (*updateFunc)(TF_OpKernelContext* ctx, TF_Tensor* tensor,
                         TF_Tensor* value, int Op),
      TF_Status* status);

=head2 TF_MaybeLockVariableInputMutexesInOrder

=over 2

  This is a helper function which acquires mutexes in-order to provide a
  thread-safe way of performing a weights update during the optimizer op. It
  returns an opaque LockHolder handle back to the plugin. This handle is
  passed to the Release API for releasing the locks when the weight update is
  done. The caller takes ownership of the `source` and `dest` tensors and is
  responsible for freeing them with TF_DeleteTensor.

=back

  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_MaybeLockVariableInputMutexesInOrder(
      TF_OpKernelContext* ctx, bool do_lock, bool sparse, const int* const inputs,
      size_t len,
      void (*copyFunc)(TF_OpKernelContext* ctx, TF_Tensor* source,
                       TF_Tensor* dest),
      TF_VariableInputLockHolder** lockHolder, TF_Status* status);
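
The in-order acquisition this helper performs is the standard
deadlock-avoidance technique: every caller locks a given pair of mutexes in
the same global order, so two threads can never hold one lock each while
waiting on the other. A self-contained pthreads sketch of the idea (unrelated
to the actual TensorFlow implementation):

```c
#include <pthread.h>
#include <stdint.h>

/* Lock two mutexes in a single global order (here, by address) so that
   concurrent callers cannot deadlock by acquiring them in opposite orders. */
static void lock_pair_in_order(pthread_mutex_t* a, pthread_mutex_t* b) {
  if (a == b) { pthread_mutex_lock(a); return; }
  if ((uintptr_t)a > (uintptr_t)b) { pthread_mutex_t* t = a; a = b; b = t; }
  pthread_mutex_lock(a);
  pthread_mutex_lock(b);
}

/* Release both locks; safe for the a == b case as well. */
static void unlock_pair(pthread_mutex_t* a, pthread_mutex_t* b) {
  pthread_mutex_unlock(a);
  if (b != a) pthread_mutex_unlock(b);
}
```

Callers may pass the two mutexes in either order; the helper normalizes the
order before locking.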

=head2 TF_GetInputTensorFromVariable

=over 2

  This interface returns the `out` tensor, updated to correspond to the
  variable passed at the given input index. The caller takes ownership of the
  `source` and `dest` tensors and is responsible for freeing them with
  TF_DeleteTensor.

=back

  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_GetInputTensorFromVariable(
      TF_OpKernelContext* ctx, int input, bool lock_held, bool isVariantType,
      bool sparse,
      void (*copyFunc)(TF_OpKernelContext* ctx, TF_Tensor* source,
                       TF_Tensor* dest),
      TF_Tensor** out, TF_Status* status);

=head2 TF_OpKernelContext_ForwardRefInputToRefOutput

=over 2

  This interface forwards the reference from the input to the output tensor
  corresponding to the indices provided with `input_index` and
  `output_index`.

=back

  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelContext_ForwardRefInputToRefOutput(
      TF_OpKernelContext* ctx, int32_t input_index, int32_t output_index);

=head2 TF_ReleaseVariableInputLockHolder

=over 2

  Releases the opaque lock handle returned by the
  `TF_MaybeLockVariableInputMutexesInOrder` API.

=back

  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_ReleaseVariableInputLockHolder(
      TF_VariableInputLockHolder* lockHolder);

=head2 TF_GetInputByName

=over 2

  Allows a plugin to get a TF_Tensor given its input name.

=back

  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_GetInputByName(TF_OpKernelContext* ctx,
                                               const char* inputName,
                                               TF_Tensor** tensor,
                                               TF_Status* status);

=head2 TF_OpKernelConstruction_GetAttrTensorShape

=over 2

  Interprets the named kernel construction attribute as a shape attribute and
  fills in `dims` with the size of each dimension. `dims` must point to an
  array of length at least `num_dims` (ideally set to total_size from
  TF_OpKernelConstruction_GetAttrSize(ctx, attr_name, &list_size,
  &total_size)).

=back

  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_OpKernelConstruction_GetAttrTensorShape(
      TF_OpKernelConstruction* ctx, const char* attr_name, int64_t* dims,
      size_t num_dims, TF_Status* status);

=head2 TF_IsRefInput

=over 2

=back

  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern bool TF_IsRefInput(TF_OpKernelContext* ctx, int i,
                                           TF_Status* status);

=head2 TF_AddNVariant

=over 2

  Exposes a higher-level AddN operation for pluggable vendors to implement
  in the plugin for Variant data types. The API takes in the context and a
  callback provided by the pluggable vendor to do a binary Add operation on
  the tensors unwrapped from the Variant tensors. The caller takes ownership
  of the `a`, `b` and `out` tensors and is responsible for freeing them with
  TF_DeleteTensor.

=back

  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_AddNVariant(
      TF_OpKernelContext* ctx,
      void (*binary_add_func)(TF_OpKernelContext* ctx, TF_Tensor* a, TF_Tensor* b,
                              TF_Tensor* out),
      TF_Status* status);

=head2 TF_ZerosLikeVariant

=over 2

  Exposes a higher-level ZerosLike operation for pluggable vendors to
  implement in the plugin for Variant data types. The API takes in the
  context and a callback provided by the pluggable vendor to do a ZerosLike
  operation on the tensors unwrapped from the Variant tensors. The caller
  takes ownership of the `input` and `out` tensors and is responsible for
  freeing them with TF_DeleteTensor.

=back

  /* From <tensorflow/c/kernels_experimental.h> */
  TF_CAPI_EXPORT extern void TF_ZerosLikeVariant(
      TF_OpKernelContext* ctx,
      void (*zeros_like_func)(TF_OpKernelContext* ctx, TF_Tensor* input,
                              TF_Tensor* out),
      TF_Status* status);

=head2 TFE_NewContextOptions

=over 2

  Return a new options object.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TFE_ContextOptions* TFE_NewContextOptions(void);

=head2 TFE_ContextOptionsSetConfig

=over 2

  Set the config in TF_ContextOptions.options.
  config should be a serialized tensorflow.ConfigProto proto.
  If config was not parsed successfully as a ConfigProto, record the
  error information in *status.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextOptionsSetConfig(
      TFE_ContextOptions* options, const void* proto, size_t proto_len,
      TF_Status* status);

=head2 TFE_ContextOptionsSetAsync

=over 2

  Sets the default execution mode (sync/async). Note that this can be
  overridden per thread using TFE_ContextSetExecutorForThread.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextOptionsSetAsync(TFE_ContextOptions*,
                                                        unsigned char enable);

=head2 TFE_ContextOptionsSetDevicePlacementPolicy

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextOptionsSetDevicePlacementPolicy(
      TFE_ContextOptions*, TFE_ContextDevicePlacementPolicy);

=head2 TFE_DeleteContextOptions

=over 2

  Destroy an options object.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_DeleteContextOptions(TFE_ContextOptions*);

=head2 TFE_NewContext

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TFE_Context* TFE_NewContext(
      const TFE_ContextOptions* opts, TF_Status* status);

=head2 TFE_DeleteContext

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_DeleteContext(TFE_Context* ctx);
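
Taken together, the context functions above follow a simple create/use/delete pattern. A minimal sketch (not part of the upstream headers; error checking omitted):

  TF_Status* status = TF_NewStatus();
  TFE_ContextOptions* opts = TFE_NewContextOptions();
  TFE_ContextOptionsSetAsync(opts, 0);  /* synchronous execution */
  TFE_Context* ctx = TFE_NewContext(opts, status);
  TFE_DeleteContextOptions(opts);       /* safe once the context exists */
  /* ... execute eager ops against ctx ... */
  TFE_DeleteContext(ctx);
  TF_DeleteStatus(status);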

=head2 TFE_ContextListDevices

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TF_DeviceList* TFE_ContextListDevices(TFE_Context* ctx,
                                                              TF_Status* status);

=head2 TFE_ContextClearCaches

=over 2

  Clears the internal caches in the TFE context. Useful when reseeding random
  ops.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextClearCaches(TFE_Context* ctx);

=head2 TFE_ContextSetThreadLocalDevicePlacementPolicy

=over 2

  Sets a thread-local device placement policy. After this call, other calls to
  TFE_Execute in the same thread will use the device policy specified here
  instead of the device policy used to construct the context. This has no
  effect on the device policy used by other program threads.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextSetThreadLocalDevicePlacementPolicy(
      TFE_Context* ctx, TFE_ContextDevicePlacementPolicy policy);

=head2 TFE_ContextGetDevicePlacementPolicy

=over 2

  Returns the device placement policy to be used by this context in the current
  thread.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TFE_ContextDevicePlacementPolicy
  TFE_ContextGetDevicePlacementPolicy(TFE_Context* ctx);

=head2 TFE_ContextSetServerDef

=over 2

  A tensorflow.ServerDef specifies remote workers (in addition to the current
  worker's name). Operations created in this context can then be executed on
  any of these remote workers by setting an appropriate device.
  
  If a ServerDef is set, all servers identified by it must be up when the
  context is created.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextSetServerDef(TFE_Context* ctx,
                                                     int keep_alive_secs,
                                                     const void* proto,
                                                     size_t proto_len,
                                                     TF_Status* status);

=head2 TFE_NewTensorHandle

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TFE_TensorHandle* TFE_NewTensorHandle(const TF_Tensor* t,
                                                              TF_Status* status);

=head2 TFE_DeleteTensorHandle

=over 2

  Indicates that the caller will not be using `h` any more.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_DeleteTensorHandle(TFE_TensorHandle* h);

=head2 TFE_TensorHandleDataType

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TF_DataType TFE_TensorHandleDataType(TFE_TensorHandle* h);

=head2 TFE_TensorHandleNumDims

=over 2

  This function will block till the operation that produces `h` has completed.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern int TFE_TensorHandleNumDims(TFE_TensorHandle* h,
                                                    TF_Status* status);

=head2 TFE_TensorHandleNumElements

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern int64_t TFE_TensorHandleNumElements(TFE_TensorHandle* h,
                                                            TF_Status* status);

=head2 TFE_TensorHandleDim

=over 2

  This function will block till the operation that produces `h` has completed.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern int64_t TFE_TensorHandleDim(TFE_TensorHandle* h,
                                                    int dim_index,
                                                    TF_Status* status);

=head2 TFE_TensorHandleDeviceName

=over 2

  Returns the device of the operation that produced `h`. If `h` was produced by
  a copy, returns the destination device of the copy. Note that the returned
  device name is not always the device holding the tensor handle's memory. If
  you want the latter, use TFE_TensorHandleBackingDeviceName. This function
  will block till the operation that produces `h` has completed.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern const char* TFE_TensorHandleDeviceName(
      TFE_TensorHandle* h, TF_Status* status);

=head2 TFE_TensorHandleBackingDeviceName

=over 2

  Returns the name of the device in whose memory `h` resides.
  
  This function will block till the operation that produces `h` has completed.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern const char* TFE_TensorHandleBackingDeviceName(
      TFE_TensorHandle* h, TF_Status* status);

=head2 TFE_TensorHandleCopySharingTensor

=over 2

  Return a pointer to a new TFE_TensorHandle that shares the underlying tensor
  with `h`. On success, `status` is set to OK. On failure, `status` reflects
  the error and a nullptr is returned.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TFE_TensorHandle* TFE_TensorHandleCopySharingTensor(
      TFE_TensorHandle* h, TF_Status* status);

=head2 TFE_TensorHandleResolve

=over 2

  This function will block till the operation that produces `h` has
  completed. The memory returned might alias the internal memory used by
  TensorFlow. Hence, callers should not mutate this memory (for example by
  modifying the memory region pointed to by TF_TensorData() on the returned
  TF_Tensor).

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TF_Tensor* TFE_TensorHandleResolve(TFE_TensorHandle* h,
                                                           TF_Status* status);
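
A minimal sketch of wrapping and resolving a tensor handle with the functions above, assuming `status` is an existing TF_Status and `t` an existing float TF_Tensor (illustrative only; error checking omitted):

  TFE_TensorHandle* h = TFE_NewTensorHandle(t, status);
  TF_Tensor* resolved = TFE_TensorHandleResolve(h, status);
  /* Read-only access: this memory may alias TensorFlow internals. */
  const float* data = (const float*)TF_TensorData(resolved);
  TF_DeleteTensor(resolved);
  TFE_DeleteTensorHandle(h);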

=head2 TFE_TensorHandleCopyToDevice

=over 2

  Creates a new TFE_TensorHandle with the same contents as 'h' but placed
  in the memory of the device named 'device_name'.
  If the source and destination are the same device, this creates a new handle
  that shares the underlying buffer. Otherwise, it currently requires at least
  one of the source or destination devices to be CPU (i.e., for the source or
  destination tensor to be placed in host memory).
  If async execution is enabled, the copy may be enqueued and the call will
  return a "non-ready" handle. Otherwise, this function returns after the copy
  has been done.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TFE_TensorHandle* TFE_TensorHandleCopyToDevice(
      TFE_TensorHandle* h, TFE_Context* ctx, const char* device_name,
      TF_Status* status);

=head2 TFE_TensorHandleTensorDebugInfo

=over 2

  Retrieves TFE_TensorDebugInfo for `handle`.
  If TFE_TensorHandleTensorDebugInfo succeeds, `status` is set to OK and the
  caller is responsible for deleting the returned TFE_TensorDebugInfo.
  If it fails, `status` is set to an appropriate error and nullptr is
  returned. This function can block till the operation that produces `handle`
  has completed.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TFE_TensorDebugInfo* TFE_TensorHandleTensorDebugInfo(
      TFE_TensorHandle* h, TF_Status* status);

=head2 TFE_DeleteTensorDebugInfo

=over 2

  Deletes `debug_info`.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_DeleteTensorDebugInfo(
      TFE_TensorDebugInfo* debug_info);

=head2 TFE_TensorDebugInfoOnDeviceNumDims

=over 2

  Returns the number of dimensions used to represent the tensor on its device.
  The number of dimensions used to represent the tensor on device can be
  different from the number returned by TFE_TensorHandleNumDims.
  The return value was current at the time of TFE_TensorDebugInfo creation.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern int TFE_TensorDebugInfoOnDeviceNumDims(
      TFE_TensorDebugInfo* debug_info);

=head2 TFE_TensorDebugInfoOnDeviceDim

=over 2

  Returns the number of elements in dimension `dim_index`.
  The tensor representation on device can be transposed relative to its
  representation on host. The data contained in dimension `dim_index` on
  device can correspond to the data contained in another dimension of the
  on-host representation. The dimensions are indexed using the standard
  TensorFlow major-to-minor order (slowest varying dimension first),
  not XLA's minor-to-major order.
  On-device dimensions can be padded. TFE_TensorDebugInfoOnDeviceDim returns
  the number of elements in a dimension after padding.
  The return value was current at the time of TFE_TensorDebugInfo creation.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern int64_t TFE_TensorDebugInfoOnDeviceDim(
      TFE_TensorDebugInfo* debug_info, int dim_index);

=head2 TFE_NewOp

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TFE_Op* TFE_NewOp(TFE_Context* ctx,
                                          const char* op_or_function_name,
                                          TF_Status* status);

=head2 TFE_DeleteOp

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_DeleteOp(TFE_Op* op);

=head2 TFE_OpGetName

=over 2

  Returns the op or function name `op` will execute.
  
  The returned string remains valid throughout the lifetime of 'op'.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern const char* TFE_OpGetName(const TFE_Op* op,
                                                  TF_Status* status);

=head2 TFE_OpGetContext

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TFE_Context* TFE_OpGetContext(const TFE_Op* op,
                                                      TF_Status* status);

=head2 TFE_OpSetDevice

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetDevice(TFE_Op* op, const char* device_name,
                                             TF_Status* status);

=head2 TFE_OpGetDevice

=over 2

  The returned string remains valid throughout the lifetime of 'op'.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern const char* TFE_OpGetDevice(const TFE_Op* op,
                                                    TF_Status* status);

=head2 TFE_OpAddInput

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpAddInput(TFE_Op* op, TFE_TensorHandle* input,
                                            TF_Status* status);

=head2 TFE_OpAddInputList

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpAddInputList(TFE_Op* op,
                                                TFE_TensorHandle** inputs,
                                                int num_inputs,
                                                TF_Status* status);

=head2 TFE_OpGetFlatInputCount

=over 2

  Fetches the current number of inputs attached to `op`.
  
  Does not use the operation's definition to determine how many inputs should
  be attached. It is intended for use with TFE_OpGetFlatInput to inspect an
  already-finalized operation.
  
  Note that TFE_OpGetFlatInputCount and TFE_OpGetFlatInput operate on a flat
  sequence of inputs, unlike TFE_OpGetInputLength (for getting the length of a
  particular named input list, which may only be part of the op's inputs).

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern int TFE_OpGetFlatInputCount(const TFE_Op* op,
                                                    TF_Status* status);

=head2 TFE_OpGetFlatInput

=over 2

  Returns a borrowed reference to one of `op`'s inputs. Use
  `TFE_TensorHandleCopySharingTensor` to make a new reference.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TFE_TensorHandle* TFE_OpGetFlatInput(const TFE_Op* op,
                                                             int index,
                                                             TF_Status* status);

=head2 TFE_OpGetAttrType

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TF_AttrType TFE_OpGetAttrType(TFE_Op* op,
                                                      const char* attr_name,
                                                      unsigned char* is_list,
                                                      TF_Status* status);

=head2 TFE_OpNameGetAttrType

=over 2

  Get an attribute type given an op name; a fusion of TFE_NewOp and
  TFE_OpGetAttrType for use from Python without the overhead of the individual
  calls and memory management of TFE_Op.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern TF_AttrType TFE_OpNameGetAttrType(
      TFE_Context* ctx, const char* op_or_function_name, const char* attr_name,
      unsigned char* is_list, TF_Status* status);

=head2 TFE_OpSetAttrString

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrString(TFE_Op* op,
                                                 const char* attr_name,
                                                 const void* value,
                                                 size_t length);

=head2 TFE_OpSetAttrInt

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrInt(TFE_Op* op, const char* attr_name,
                                              int64_t value);

=head2 TFE_OpSetAttrFloat

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrFloat(TFE_Op* op, const char* attr_name,
                                                float value);

=head2 TFE_OpSetAttrBool

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrBool(TFE_Op* op, const char* attr_name,
                                               unsigned char value);

=head2 TFE_OpSetAttrType

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrType(TFE_Op* op, const char* attr_name,
                                               TF_DataType value);

=head2 TFE_OpSetAttrShape

=over 2

  If the number of dimensions is unknown, `num_dims` must be set to
  -1 and `dims` can be null.  If a dimension is unknown, the
  corresponding entry in the `dims` array must be -1.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrShape(TFE_Op* op, const char* attr_name,
                                                const int64_t* dims,
                                                const int num_dims,
                                                TF_Status* out_status);

=head2 TFE_OpSetAttrFunction

=over 2

  Sets the attribute attr_name to be a function specified by 'function'.
  
  TODO(ashankar,iga): Add this functionality to the C API for graph
  construction. Perhaps we want an AttrValueMap equivalent in the C API?

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrFunction(TFE_Op* op,
                                                   const char* attr_name,
                                                   const TFE_Op* value);

=head2 TFE_OpSetAttrFunctionName

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT void TFE_OpSetAttrFunctionName(TFE_Op* op, const char* attr_name,
                                                const char* data, size_t length);

=head2 TFE_OpSetAttrTensor

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrTensor(TFE_Op* op,
                                                 const char* attr_name,
                                                 TF_Tensor* tensor,
                                                 TF_Status* status);

=head2 TFE_OpSetAttrStringList

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrStringList(TFE_Op* op,
                                                     const char* attr_name,
                                                     const void* const* values,
                                                     const size_t* lengths,
                                                     int num_values);

=head2 TFE_OpSetAttrIntList

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrIntList(TFE_Op* op,
                                                  const char* attr_name,
                                                  const int64_t* values,
                                                  int num_values);

=head2 TFE_OpSetAttrFloatList

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrFloatList(TFE_Op* op,
                                                    const char* attr_name,
                                                    const float* values,
                                                    int num_values);

=head2 TFE_OpSetAttrBoolList

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrBoolList(TFE_Op* op,
                                                   const char* attr_name,
                                                   const unsigned char* values,
                                                   int num_values);

=head2 TFE_OpSetAttrTypeList

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrTypeList(TFE_Op* op,
                                                   const char* attr_name,
                                                   const TF_DataType* values,
                                                   int num_values);

=head2 TFE_OpSetAttrShapeList

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrShapeList(
      TFE_Op* op, const char* attr_name, const int64_t** dims,
      const int* num_dims, int num_values, TF_Status* out_status);

=head2 TFE_OpSetAttrFunctionList

=over 2

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrFunctionList(TFE_Op* op,
                                                       const char* attr_name,
                                                       const TFE_Op** value,
                                                       int num_values);

=head2 TFE_OpGetInputLength

=over 2

  Returns the length (number of tensors) of the input argument `input_name`
  found in the provided `op`.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern int TFE_OpGetInputLength(TFE_Op* op,
                                                 const char* input_name,
                                                 TF_Status* status);

=head2 TFE_OpGetOutputLength

=over 2

  Returns the length (number of tensors) of the output argument `output_name`
  found in the provided `op`.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern int TFE_OpGetOutputLength(TFE_Op* op,
                                                  const char* output_name,
                                                  TF_Status* status);

=head2 TFE_Execute

=over 2

  Execute the operation defined by 'op' and return handles to computed
  tensors in `retvals`.
  
  'retvals' must point to a pre-allocated array of TFE_TensorHandle* and
  '*num_retvals' should be set to the size of this array. It is an error if
  the size of 'retvals' is less than the number of outputs. This call sets
  *num_retvals to the number of outputs.
  
  If async execution is enabled, the call may simply enqueue the execution
  and return "non-ready" handles in `retvals`. Note that any handles contained
  in 'op' should not be mutated till the kernel execution actually finishes.
  
  For sync execution, if any of the inputs to `op` are not ready, this call
  will block till they become ready and then return when the kernel execution
  is done.
  TODO(agarwal): change num_retvals to int from int*.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_Execute(TFE_Op* op, TFE_TensorHandle** retvals,
                                         int* num_retvals, TF_Status* status);
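
A minimal sketch of building and executing a single eager op with the functions above, assuming `ctx`, `status`, and float tensor handles `a` and `b` already exist (the "AddV2" op name and its "T" attribute are assumptions about the registered op set; error checking omitted):

  TFE_Op* op = TFE_NewOp(ctx, "AddV2", status);
  TFE_OpSetAttrType(op, "T", TF_FLOAT);
  TFE_OpAddInput(op, a, status);
  TFE_OpAddInput(op, b, status);
  TFE_TensorHandle* retvals[1];
  int num_retvals = 1;                  /* size of retvals on entry */
  TFE_Execute(op, retvals, &num_retvals, status);
  TFE_DeleteOp(op);
  /* retvals[0] now holds the sum; delete it when done. */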

=head2 TFE_ContextAddFunctionDef

=over 2

  Add a function (serialized FunctionDef protocol buffer) to ctx so
  that it can be invoked using TFE_Execute.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextAddFunctionDef(
      TFE_Context* ctx, const char* serialized_function_def, size_t size,
      TF_Status* status);

=head2 TFE_ContextAddFunction

=over 2

  Adds a function (created from TF_GraphToFunction or
  TF_FunctionImportFunctionDef) to the context, allowing it to be executed with
  TFE_Execute by creating an op with the same name as the function.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextAddFunction(TFE_Context* ctx,
                                                    TF_Function* function,
                                                    TF_Status* status);

=head2 TFE_ContextRemoveFunction

=over 2

  Removes a function from the context. Once removed, the function can no
  longer be run with TFE_Execute, nor can any TFE_Op that has it as an
  attribute, nor any other function that calls it as an attribute.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextRemoveFunction(TFE_Context* ctx,
                                                       const char* name,
                                                       TF_Status* status);

=head2 TFE_ContextHasFunction

=over 2

  Checks whether a function is registered under `name`.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT unsigned char TFE_ContextHasFunction(TFE_Context* ctx,
                                                      const char* name);

=head2 TFE_ContextEnableRunMetadata

=over 2

  Enables tracing of RunMetadata on the ops executed from this context.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextEnableRunMetadata(TFE_Context* ctx);

=head2 TFE_ContextDisableRunMetadata

=over 2

  Disables tracing of RunMetadata on the ops executed from this context.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextDisableRunMetadata(TFE_Context* ctx);

=head2 TFE_ContextExportRunMetadata

=over 2

  Populates the passed-in buffer with a serialized RunMetadata protocol buffer
  containing any run metadata information accumulated so far and clears this
  information.
  If async mode is enabled, this call blocks till all currently pending ops are
  done.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextExportRunMetadata(TFE_Context* ctx,
                                                          TF_Buffer* buf,
                                                          TF_Status* status);

=head2 TFE_ContextStartStep

=over 2

  Some TF ops need a step container to be set to limit the lifetime of some
  resources (mostly TensorArray and Stack, used in while loop gradients in
  graph mode). Calling this on a context tells it to start a step.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextStartStep(TFE_Context* ctx);

=head2 TFE_ContextEndStep

=over 2

  Ends a step. When there is no active step (that is, every started step has
  been ended) step containers will be cleared. Note: it is not safe to call
  TFE_ContextEndStep while ops that rely on the step container may be running.

=back

  /* From <tensorflow/c/eager/c_api.h> */
  TF_CAPI_EXPORT extern void TFE_ContextEndStep(TFE_Context* ctx);
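
The two calls above are intended to be paired around one logical step; a minimal sketch, assuming `ctx` already exists:

  TFE_ContextStartStep(ctx);
  /* ... execute ops that create step-scoped resources
     (e.g. TensorArray, Stack) ... */
  TFE_ContextEndStep(ctx);  /* only after those ops have finished */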

=head2 TFE_HandleToDLPack

=over 2

  Converts an eager tensor handle to DLPack (DLManagedTensor*) and returns
  the void* for further PyCapsule construction.

=back

  /* From <tensorflow/c/eager/dlpack.h> */
  TF_CAPI_EXPORT extern void* TFE_HandleToDLPack(TFE_TensorHandle* h,
                                                 TF_Status* status);

=head2 TFE_HandleFromDLPack

=over 2

  Converts DLPack (DLManagedTensor*) to an eager tensor handle.

=back

  /* From <tensorflow/c/eager/dlpack.h> */
  TF_CAPI_EXPORT extern TFE_TensorHandle* TFE_HandleFromDLPack(void* dlm,
                                                               TF_Status* status,
                                                               TFE_Context* ctx);

=head2 TFE_CallDLManagedTensorDeleter

=over 2

  Calls the destructor of DLManagedTensor, used in the destructor of PyCapsule.

=back

  /* From <tensorflow/c/eager/dlpack.h> */
  TF_CAPI_EXPORT extern void TFE_CallDLManagedTensorDeleter(void* dlm_ptr);

=head2 TFE_OpReset

=over 2

  Resets `op_to_reset` with `op_or_function_name` and `raw_device_name`. This
  is a performance optimization that reuses an existing unused op rather than
  creating a new op every time. If `raw_device_name` is `NULL` or empty, it
  does not set the device name. If it is not `NULL`, then it attempts to parse
  and set the device name. It is effectively `TFE_OpSetDevice`, but faster
  than calling that separately, because if the existing op already has the
  same `raw_device_name`, it skips parsing and leaves it as is.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_OpReset(TFE_Op* op_to_reset,
                                         const char* op_or_function_name,
                                         const char* raw_device_name,
                                         TF_Status* status);

=head2 TFE_ContextEnableGraphCollection

=over 2

  Enables only graph collection in RunMetadata on the functions executed from
  this context.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_ContextEnableGraphCollection(TFE_Context* ctx);

=head2 TFE_ContextDisableGraphCollection

=over 2

  Disables only graph collection in RunMetadata on the functions executed from
  this context.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_ContextDisableGraphCollection(TFE_Context* ctx);

=head2 TFE_MonitoringCounterCellIncrementBy

=over 2

  Atomically increments the value of the cell. The value must be non-negative.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringCounterCellIncrementBy(
      TFE_MonitoringCounterCell* cell, int64_t value);

=head2 TFE_MonitoringCounterCellValue

=over 2

  Retrieves the current value of the cell.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern int64_t TFE_MonitoringCounterCellValue(
      TFE_MonitoringCounterCell* cell);

=head2 TFE_MonitoringNewCounter0

=over 2

  Returns a new Counter metric object. The caller should manage the lifetime
  of the object. Using a duplicate metric name will crash the program with a
  fatal error.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringCounter0* TFE_MonitoringNewCounter0(
      const char* name, TF_Status* status, const char* description);

=head2 TFE_MonitoringDeleteCounter0

=over 2

  Deletes the Counter object.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteCounter0(
      TFE_MonitoringCounter0* counter);

=head2 TFE_MonitoringGetCellCounter0

=over 2

  Retrieves the cell from the Counter object. The Counter object manages the
  lifetime of the cell.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringCounterCell* TFE_MonitoringGetCellCounter0(
      TFE_MonitoringCounter0* counter);
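
A minimal sketch of the Counter0 lifecycle using the functions above (the metric name and description are illustrative; error checking omitted):

  TF_Status* status = TF_NewStatus();
  TFE_MonitoringCounter0* counter = TFE_MonitoringNewCounter0(
      "/test/my_counter", status, "Example counter");  /* name must be unique */
  TFE_MonitoringCounterCell* cell = TFE_MonitoringGetCellCounter0(counter);
  TFE_MonitoringCounterCellIncrementBy(cell, 1);
  int64_t value = TFE_MonitoringCounterCellValue(cell);
  TFE_MonitoringDeleteCounter0(counter);  /* also releases the cell */
  TF_DeleteStatus(status);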

=head2 TFE_MonitoringNewCounter1

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringCounter1* TFE_MonitoringNewCounter1(
      const char* name, TF_Status* status, const char* description,
      const char* label1);

=head2 TFE_MonitoringDeleteCounter1

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteCounter1(
      TFE_MonitoringCounter1* counter);

=head2 TFE_MonitoringGetCellCounter1

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringCounterCell* TFE_MonitoringGetCellCounter1(
      TFE_MonitoringCounter1* counter, const char* label1);

=head2 TFE_MonitoringNewCounter2

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringCounter2* TFE_MonitoringNewCounter2(
      const char* name, TF_Status* status, const char* description,
      const char* label1, const char* label2);

=head2 TFE_MonitoringDeleteCounter2

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteCounter2(
      TFE_MonitoringCounter2* counter);

=head2 TFE_MonitoringGetCellCounter2

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringCounterCell* TFE_MonitoringGetCellCounter2(
      TFE_MonitoringCounter2* counter, const char* label1, const char* label2);

=head2 TFE_MonitoringIntGaugeCellSet

=over 2

  Atomically sets the value of the cell.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringIntGaugeCellSet(
      TFE_MonitoringIntGaugeCell* cell, int64_t value);

=head2 TFE_MonitoringIntGaugeCellValue

=over 2

  Retrieves the current value of the cell.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern int64_t TFE_MonitoringIntGaugeCellValue(
      TFE_MonitoringIntGaugeCell* cell);

=head2 TFE_MonitoringNewIntGauge0

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringIntGauge0* TFE_MonitoringNewIntGauge0(
      const char* name, TF_Status* out_status, const char* description);

=head2 TFE_MonitoringDeleteIntGauge0

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteIntGauge0(
      TFE_MonitoringIntGauge0* gauge);

=head2 TFE_MonitoringGetCellIntGauge0

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringIntGaugeCell*
  TFE_MonitoringGetCellIntGauge0(TFE_MonitoringIntGauge0* gauge);

=head2 TFE_MonitoringNewIntGauge1

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringIntGauge1* TFE_MonitoringNewIntGauge1(
      const char* name, TF_Status* out_status, const char* description,
      const char* label1);

=head2 TFE_MonitoringDeleteIntGauge1

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteIntGauge1(
      TFE_MonitoringIntGauge1* gauge);

=head2 TFE_MonitoringGetCellIntGauge1

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringIntGaugeCell*
  TFE_MonitoringGetCellIntGauge1(TFE_MonitoringIntGauge1* gauge,
                                 const char* label1);

=head2 TFE_MonitoringNewIntGauge2

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringIntGauge2* TFE_MonitoringNewIntGauge2(
      const char* name, TF_Status* out_status, const char* description,
      const char* label1, const char* label2);

=head2 TFE_MonitoringDeleteIntGauge2

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteIntGauge2(
      TFE_MonitoringIntGauge2* gauge);

=head2 TFE_MonitoringGetCellIntGauge2

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringIntGaugeCell*
  TFE_MonitoringGetCellIntGauge2(TFE_MonitoringIntGauge2* gauge,
                                 const char* label1, const char* label2);

=head2 TFE_MonitoringStringGaugeCellSet

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringStringGaugeCellSet(
      TFE_MonitoringStringGaugeCell* cell, const char* value);

=head2 TFE_MonitoringStringGaugeCellValue

=over 2

  Retrieves the string value and saves it in the buffer.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern const void TFE_MonitoringStringGaugeCellValue(
      TFE_MonitoringStringGaugeCell* cell, TF_Buffer* buf);

=head2 TFE_MonitoringNewStringGauge0

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringStringGauge0* TFE_MonitoringNewStringGauge0(
      const char* name, TF_Status* out_status, const char* description);

=head2 TFE_MonitoringDeleteStringGauge0

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteStringGauge0(
      TFE_MonitoringStringGauge0* gauge);

=head2 TFE_MonitoringGetCellStringGauge0

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringStringGaugeCell*
  TFE_MonitoringGetCellStringGauge0(TFE_MonitoringStringGauge0* gauge);

=head2 TFE_MonitoringNewStringGauge1

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringStringGauge1* TFE_MonitoringNewStringGauge1(
      const char* name, TF_Status* out_status, const char* description,
      const char* label1);

=head2 TFE_MonitoringDeleteStringGauge1

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteStringGauge1(
      TFE_MonitoringStringGauge1* gauge);

=head2 TFE_MonitoringGetCellStringGauge1

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringStringGaugeCell*
  TFE_MonitoringGetCellStringGauge1(TFE_MonitoringStringGauge1* gauge,
                                    const char* label1);

=head2 TFE_MonitoringNewStringGauge2

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringStringGauge2* TFE_MonitoringNewStringGauge2(
      const char* name, TF_Status* out_status, const char* description,
      const char* label1, const char* label2);

=head2 TFE_MonitoringDeleteStringGauge2

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteStringGauge2(
      TFE_MonitoringStringGauge2* gauge);

=head2 TFE_MonitoringGetCellStringGauge2

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringStringGaugeCell*
  TFE_MonitoringGetCellStringGauge2(TFE_MonitoringStringGauge2* gauge,
                                    const char* label1, const char* label2);

=head2 TFE_MonitoringNewStringGauge3

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringStringGauge3* TFE_MonitoringNewStringGauge3(
      const char* name, TF_Status* out_status, const char* description,
      const char* label1, const char* label2, const char* label3);

=head2 TFE_MonitoringDeleteStringGauge3

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteStringGauge3(
      TFE_MonitoringStringGauge3* gauge);

=head2 TFE_MonitoringGetCellStringGauge3

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringStringGaugeCell*
  TFE_MonitoringGetCellStringGauge3(TFE_MonitoringStringGauge3* gauge,
                                    const char* label1, const char* label2,
                                    const char* label3);

=head2 TFE_MonitoringNewStringGauge4

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringStringGauge4* TFE_MonitoringNewStringGauge4(
      const char* name, TF_Status* out_status, const char* description,
      const char* label1, const char* label2, const char* label3,
      const char* label4);

=head2 TFE_MonitoringDeleteStringGauge4

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteStringGauge4(
      TFE_MonitoringStringGauge4* gauge);

=head2 TFE_MonitoringGetCellStringGauge4

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringStringGaugeCell*
  TFE_MonitoringGetCellStringGauge4(TFE_MonitoringStringGauge4* gauge,
                                    const char* label1, const char* label2,
                                    const char* label3, const char* label4);

=head2 TFE_MonitoringBoolGaugeCellSet

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringBoolGaugeCellSet(
      TFE_MonitoringBoolGaugeCell* cell, bool value);

=head2 TFE_MonitoringBoolGaugeCellValue

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern bool TFE_MonitoringBoolGaugeCellValue(
      TFE_MonitoringBoolGaugeCell* cell);

=head2 TFE_MonitoringNewBoolGauge0

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringBoolGauge0* TFE_MonitoringNewBoolGauge0(
      const char* name, TF_Status* out_status, const char* description);

=head2 TFE_MonitoringDeleteBoolGauge0

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteBoolGauge0(
      TFE_MonitoringBoolGauge0* gauge);

=head2 TFE_MonitoringGetCellBoolGauge0

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringBoolGaugeCell*
  TFE_MonitoringGetCellBoolGauge0(TFE_MonitoringBoolGauge0* gauge);

=head2 TFE_MonitoringNewBoolGauge1

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringBoolGauge1* TFE_MonitoringNewBoolGauge1(
      const char* name, TF_Status* out_status, const char* description,
      const char* label1);

=head2 TFE_MonitoringDeleteBoolGauge1

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteBoolGauge1(
      TFE_MonitoringBoolGauge1* gauge);

=head2 TFE_MonitoringGetCellBoolGauge1

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringBoolGaugeCell*
  TFE_MonitoringGetCellBoolGauge1(TFE_MonitoringBoolGauge1* gauge,
                                  const char* label1);

=head2 TFE_MonitoringNewBoolGauge2

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringBoolGauge2* TFE_MonitoringNewBoolGauge2(
      const char* name, TF_Status* out_status, const char* description,
      const char* label1, const char* label2);

=head2 TFE_MonitoringDeleteBoolGauge2

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteBoolGauge2(
      TFE_MonitoringBoolGauge2* gauge);

=head2 TFE_MonitoringGetCellBoolGauge2

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringBoolGaugeCell*
  TFE_MonitoringGetCellBoolGauge2(TFE_MonitoringBoolGauge2* gauge,
                                  const char* label1, const char* label2);

=head2 TFE_MonitoringSamplerCellAdd

=over 2

  Atomically adds the given sample value to the cell.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringSamplerCellAdd(
      TFE_MonitoringSamplerCell* cell, double value);

=head2 TFE_MonitoringSamplerCellValue

=over 2

  Retrieves the current value of the cell. The return value is a HistogramProto
  saved in the buffer.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringSamplerCellValue(
      TFE_MonitoringSamplerCell* cell, TF_Buffer* buf);

=head2 TFE_MonitoringNewExponentialBuckets

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringBuckets*
  TFE_MonitoringNewExponentialBuckets(double scale, double growth_factor,
                                      int bucket_count);

=head2 TFE_MonitoringDeleteBuckets

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteBuckets(
      TFE_MonitoringBuckets* buckets);

=head2 TFE_MonitoringNewSampler0

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringSampler0* TFE_MonitoringNewSampler0(
      const char* name, TFE_MonitoringBuckets* buckets, TF_Status* out_status,
      const char* description);

=head2 TFE_MonitoringDeleteSampler0

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteSampler0(
      TFE_MonitoringSampler0* sampler);

=head2 TFE_MonitoringGetCellSampler0

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringSamplerCell* TFE_MonitoringGetCellSampler0(
      TFE_MonitoringSampler0* sampler);
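
A sampler is created from a bucket specification and records a distribution of values per cell. The following hedged sketch shows the zero-label case; it assumes libtensorflow is available, and the bucket parameters and metric name are illustrative.

```c
/* Sketch: recording samples in a zero-label sampler.
 * Assumes libtensorflow is installed; names/parameters are illustrative. */
#include <tensorflow/c/c_api.h>
#include <tensorflow/c/eager/c_api_experimental.h>

int main(void) {
  TF_Status* status = TF_NewStatus();
  /* Bucket boundaries 1, 2, 4, ... (scale 1.0, growth factor 2.0, 10 buckets). */
  TFE_MonitoringBuckets* buckets =
      TFE_MonitoringNewExponentialBuckets(1.0, 2.0, 10);
  TFE_MonitoringSampler0* sampler = TFE_MonitoringNewSampler0(
      "/my/latency", buckets, status, "Example latency sampler.");
  if (TF_GetCode(status) == TF_OK) {
    TFE_MonitoringSamplerCell* cell = TFE_MonitoringGetCellSampler0(sampler);
    TFE_MonitoringSamplerCellAdd(cell, 3.5);  /* record one sample */
    /* TFE_MonitoringSamplerCellValue(cell, buf) would serialize the
       accumulated HistogramProto into a TF_Buffer for inspection. */
    TFE_MonitoringDeleteSampler0(sampler);
  }
  TFE_MonitoringDeleteBuckets(buckets);
  TF_DeleteStatus(status);
  return 0;
}
```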

=head2 TFE_MonitoringNewSampler1

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringSampler1* TFE_MonitoringNewSampler1(
      const char* name, TFE_MonitoringBuckets* buckets, TF_Status* out_status,
      const char* description, const char* label1);

=head2 TFE_MonitoringDeleteSampler1

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteSampler1(
      TFE_MonitoringSampler1* sampler);

=head2 TFE_MonitoringGetCellSampler1

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringSamplerCell* TFE_MonitoringGetCellSampler1(
      TFE_MonitoringSampler1* sampler, const char* label1);

=head2 TFE_MonitoringNewSampler2

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringSampler2* TFE_MonitoringNewSampler2(
      const char* name, TFE_MonitoringBuckets* buckets, TF_Status* out_status,
      const char* description, const char* label1, const char* label2);

=head2 TFE_MonitoringDeleteSampler2

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_MonitoringDeleteSampler2(
      TFE_MonitoringSampler2* sampler);

=head2 TFE_MonitoringGetCellSampler2

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_MonitoringSamplerCell* TFE_MonitoringGetCellSampler2(
      TFE_MonitoringSampler2* sampler, const char* label1, const char* label2);

=head2 TFE_ContextOptionsSetTfrt

=over 2

  Sets whether to use TFRT.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_ContextOptionsSetTfrt(TFE_ContextOptions*,
                                                       bool use_tfrt);

=head2 TFE_ContextOptionsSetTfrtDistributedRuntime

=over 2

  Sets whether to use the TFRT distributed runtime.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_ContextOptionsSetTfrtDistributedRuntime(
      TFE_ContextOptions* options, bool use_tfrt_distributed_runtime);

=head2 TFE_GetContextId

=over 2

  Returns the context_id from the EagerContext, which is used by the
  EagerService to maintain consistency between client and worker. The
  context_id is initialized with a dummy value and is later set when the worker
  is initialized (either locally or remotely). The context_id can change during
  the process lifetime; when it does, the worker should be reinitialized as
  well (e.g. its caches cleared).

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern uint64_t TFE_GetContextId(TFE_Context* ctx);

=head2 TFE_NewCancellationManager

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_CancellationManager* TFE_NewCancellationManager();

=head2 TFE_CancellationManagerIsCancelled

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern bool TFE_CancellationManagerIsCancelled(
      TFE_CancellationManager*);

=head2 TFE_CancellationManagerStartCancel

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_CancellationManagerStartCancel(
      TFE_CancellationManager*);

=head2 TFE_DeleteCancellationManager

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_DeleteCancellationManager(
      TFE_CancellationManager*);

=head2 TFE_OpSetCancellationManager

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetCancellationManager(
      TFE_Op* op, TFE_CancellationManager* cancellation_manager,
      TF_Status* status);
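
A cancellation manager lets one thread request cancellation of ops that another thread has enqueued. The sketch below shows the basic wiring; it is a hedged example assuming libtensorflow is available, and the C<NoOp> op is used purely as a placeholder.

```c
/* Sketch: attaching a cancellation manager to an op, then cancelling.
 * Assumes libtensorflow is installed; "NoOp" is a placeholder op. */
#include <tensorflow/c/eager/c_api.h>
#include <tensorflow/c/eager/c_api_experimental.h>

int main(void) {
  TF_Status* status = TF_NewStatus();
  TFE_ContextOptions* opts = TFE_NewContextOptions();
  TFE_Context* ctx = TFE_NewContext(opts, status);
  TFE_DeleteContextOptions(opts);

  TFE_Op* op = TFE_NewOp(ctx, "NoOp", status);
  TFE_CancellationManager* cm = TFE_NewCancellationManager();
  TFE_OpSetCancellationManager(op, cm, status);  /* op observes cm */

  /* ... later, possibly from another thread: */
  TFE_CancellationManagerStartCancel(cm);        /* request cancellation */
  if (TFE_CancellationManagerIsCancelled(cm)) {
    /* pending work associated with cm will be abandoned */
  }

  TFE_DeleteCancellationManager(cm);
  TFE_DeleteOp(op);
  TFE_DeleteContext(ctx);
  TF_DeleteStatus(status);
  return 0;
}
```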

=head2 TFE_NewExecutor

=over 2

  Creates a new eager Executor. Nodes in one executor are guaranteed to be
  executed in sequence. Assigning nodes to different executors allows executing
  nodes in parallel.
  in_flight_nodes_limit: when is_async is true, this value controls the
  maximum number of in-flight async nodes. Enqueuing additional async ops
  after the limit is reached blocks until some in-flight nodes finish.
  This bounds the memory held by in-flight TensorHandles that are
  referenced by the in-flight nodes.
  A recommended value has not been established.
  A value of 0 removes the limit, which is the behavior of TensorFlow 2.11.
  When is_async is false, the value is ignored.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_Executor* TFE_NewExecutor(
      bool is_async, bool enable_streaming_enqueue, int in_flight_nodes_limit);

=head2 TFE_DeleteExecutor

=over 2

  Deletes the eager Executor without waiting for enqueued nodes. Please call
  TFE_ExecutorWaitForAllPendingNodes before calling this API if you want to
  make sure all nodes are finished.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_DeleteExecutor(TFE_Executor*);

=head2 TFE_ExecutorIsAsync

=over 2

  Returns true if the executor is in async mode.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern bool TFE_ExecutorIsAsync(TFE_Executor*);

=head2 TFE_ExecutorWaitForAllPendingNodes

=over 2

  Causes the calling thread to block until all ops dispatched in this executor
  have been executed. Note that "execution" here refers to kernel execution /
  scheduling of copies, etc. Similar to sync execution, it doesn't guarantee
  that lower level device queues (like GPU streams) have been flushed.
  
  This call may not block for execution of ops enqueued concurrently with this
  call.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_ExecutorWaitForAllPendingNodes(
      TFE_Executor*, TF_Status* status);

=head2 TFE_ExecutorClearError

=over 2

  When an error happens, any pending operations are discarded, and newly issued
  ops return an error. This call clears the error state and re-enables
  execution of newly issued ops.
  
  Note that outputs of discarded ops remain in a corrupt state and should not
  be used for future calls.
  TODO(agarwal): mark the affected handles and raise errors if they are used.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_ExecutorClearError(TFE_Executor*);

=head2 TFE_ContextSetExecutorForThread

=over 2

  Sets a custom Executor for the current thread. All nodes created by this
  thread will be added to this Executor. It will override the current executor.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_ContextSetExecutorForThread(TFE_Context*,
                                                             TFE_Executor*);

=head2 TFE_ContextGetExecutorForThread

=over 2

  Returns the Executor for the current thread.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_Executor* TFE_ContextGetExecutorForThread(
      TFE_Context*);
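
Putting the executor functions together, a typical pattern is to create a private async executor, install it for the current thread, enqueue ops, then drain and delete it. The following is a hedged sketch assuming libtensorflow is available; the executor parameters are illustrative.

```c
/* Sketch: installing a private async executor for the current thread.
 * Assumes libtensorflow is installed; parameters are illustrative. */
#include <stdbool.h>
#include <tensorflow/c/eager/c_api.h>
#include <tensorflow/c/eager/c_api_experimental.h>

int main(void) {
  TF_Status* status = TF_NewStatus();
  TFE_ContextOptions* opts = TFE_NewContextOptions();
  TFE_Context* ctx = TFE_NewContext(opts, status);
  TFE_DeleteContextOptions(opts);

  /* Async executor, streaming enqueue enabled, no in-flight limit (0). */
  TFE_Executor* executor = TFE_NewExecutor(true, true, 0);
  TFE_ContextSetExecutorForThread(ctx, executor);

  /* ... enqueue eager ops on this thread; they run on `executor` ... */

  /* Drain before deleting, per the TFE_DeleteExecutor caveat above. */
  TFE_ExecutorWaitForAllPendingNodes(executor, status);
  TFE_DeleteExecutor(executor);
  TFE_DeleteContext(ctx);
  TF_DeleteStatus(status);
  return 0;
}
```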

=head2 TFE_ContextUpdateServerDef

=over 2

  Updates an existing context with a new set of servers defined in a ServerDef
  proto. Servers can be added to and removed from the list of remote workers
  in the context. A new set of servers identified by the ServerDef must be up
  when the context is updated.
  
  This API is for experimental usage and may be subject to change.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_ContextUpdateServerDef(TFE_Context* ctx,
                                                        int keep_alive_secs,
                                                        const void* proto,
                                                        size_t proto_len,
                                                        TF_Status* status);

=head2 TFE_ContextCheckAlive

=over 2

  Checks whether a remote worker is alive or not. This will return true even if
  the context doesn't exist on the remote worker.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern bool TFE_ContextCheckAlive(TFE_Context* ctx,
                                                   const char* worker_name,
                                                   TF_Status* status);

=head2 TFE_ContextAsyncWait

=over 2

  Sync pending nodes in local executors (including the context default executor
  and thread executors) and streaming requests to remote executors, and get the
  combined status.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_ContextAsyncWait(TFE_Context* ctx,
                                                  TF_Status* status);

=head2 TFE_TensorHandleDevicePointer

=over 2

  This function will block until the operation that produces `h` has
  completed. This is only valid on local TFE_TensorHandles. The pointer
  returned will be on the device in which the TFE_TensorHandle resides (so e.g.
  for a GPU tensor this will return a pointer to GPU memory). The pointer is
  only guaranteed to be valid until TFE_DeleteTensorHandle is called on this
  TensorHandle. Only supports POD data types.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void* TFE_TensorHandleDevicePointer(TFE_TensorHandle*,
                                                            TF_Status*);

=head2 TFE_TensorHandleDeviceMemorySize

=over 2

  This function will block until the operation that produces `h` has
  completed. This is only valid on local TFE_TensorHandles. Returns the size in
  bytes of the memory pointed to by the device pointer returned above.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern size_t TFE_TensorHandleDeviceMemorySize(TFE_TensorHandle*,
                                                                TF_Status*);

=head2 TFE_NewTensorHandleFromDeviceMemory

=over 2

  Creates a new TensorHandle from memory residing in the physical device
  device_name. Takes ownership of the memory, and will call deleter to release
  it after TF no longer needs it or in case of error.
  
  Custom devices must use TFE_NewCustomDeviceTensorHandle instead.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_TensorHandle* TFE_NewTensorHandleFromDeviceMemory(
      TFE_Context* ctx, const char* device_name, TF_DataType, const int64_t* dims,
      int num_dims, void* data, size_t len,
      void (*deallocator)(void* data, size_t len, void* arg),
      void* deallocator_arg, TF_Status* status);
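
The deleter callback is how ownership transfers to TensorFlow: TF calls it once when the buffer is no longer needed (or on error). Below is a hedged sketch wrapping caller-allocated CPU memory; it assumes libtensorflow is available, and the device name is illustrative.

```c
/* Sketch: wrapping caller-owned CPU memory in a tensor handle.
 * Assumes libtensorflow is installed; the device name is illustrative. */
#include <tensorflow/c/eager/c_api.h>
#include <tensorflow/c/eager/c_api_experimental.h>
#include <stdlib.h>

static void free_buffer(void* data, size_t len, void* arg) {
  (void)len; (void)arg;
  free(data);  /* buffer was allocated with malloc below */
}

int main(void) {
  TF_Status* status = TF_NewStatus();
  TFE_ContextOptions* opts = TFE_NewContextOptions();
  TFE_Context* ctx = TFE_NewContext(opts, status);
  TFE_DeleteContextOptions(opts);

  const int64_t dims[] = {2, 2};
  float* data = malloc(4 * sizeof(float));  /* ownership passes to TF */
  TFE_TensorHandle* h = TFE_NewTensorHandleFromDeviceMemory(
      ctx, "/job:localhost/replica:0/task:0/device:CPU:0", TF_FLOAT,
      dims, 2, data, 4 * sizeof(float), free_buffer, NULL, status);

  TFE_DeleteTensorHandle(h);  /* eventually triggers free_buffer */
  TFE_DeleteContext(ctx);
  TF_DeleteStatus(status);
  return 0;
}
```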

=head2 TFE_HostAddressSpace

=over 2

  Retrieves the address space (i.e. job, replica, task) of the local host and
  saves it in the buffer.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_HostAddressSpace(TFE_Context* ctx,
                                                  TF_Buffer* buf);

=head2 TFE_OpGetAttrs

=over 2

  Fetch a reference to `op`'s attributes. The returned reference is only valid
  while `op` is alive.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern const TFE_OpAttrs* TFE_OpGetAttrs(const TFE_Op* op);

=head2 TFE_OpAddAttrs

=over 2

  Add attributes in `attrs` to `op`.
  
  Does not overwrite or update existing attributes, but adds new ones.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_OpAddAttrs(TFE_Op* op, const TFE_OpAttrs* attrs);
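
Together, C<TFE_OpGetAttrs> and C<TFE_OpAddAttrs> allow copying attributes from one op onto another. The following hedged sketch illustrates that, assuming libtensorflow is available; the choice of C<MatMul> and its C<transpose_a> attribute is illustrative.

```c
/* Sketch: copying one op's attributes onto another.
 * Assumes libtensorflow is installed; op/attr names are illustrative. */
#include <tensorflow/c/eager/c_api.h>
#include <tensorflow/c/eager/c_api_experimental.h>

int main(void) {
  TF_Status* status = TF_NewStatus();
  TFE_ContextOptions* opts = TFE_NewContextOptions();
  TFE_Context* ctx = TFE_NewContext(opts, status);
  TFE_DeleteContextOptions(opts);

  TFE_Op* src = TFE_NewOp(ctx, "MatMul", status);
  TFE_OpSetAttrBool(src, "transpose_a", 1);

  /* The returned reference is only valid while `src` is alive. */
  const TFE_OpAttrs* attrs = TFE_OpGetAttrs(src);

  TFE_Op* dst = TFE_NewOp(ctx, "MatMul", status);
  TFE_OpAddAttrs(dst, attrs);  /* dst now also has transpose_a set */

  TFE_DeleteOp(dst);
  TFE_DeleteOp(src);
  TFE_DeleteContext(ctx);
  TF_DeleteStatus(status);
  return 0;
}
```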

=head2 TFE_OpAttrsSerialize

=over 2

  Serialize `attrs` as a tensorflow::NameAttrList protocol buffer (into `buf`),
  containing the op name and a map of its attributes.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_OpAttrsSerialize(const TFE_OpAttrs* attrs,
                                                  TF_Buffer* buf,
                                                  TF_Status* status);

=head2 TFE_OpSetAttrValueProto

=over 2

  Set an op's attribute from a serialized AttrValue protocol buffer.
  
  Analogous to TF_SetAttrValueProto for building graph operations.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_OpSetAttrValueProto(const TFE_Op* op,
                                                     const char* attr_name,
                                                     const void* proto,
                                                     size_t proto_len,
                                                     TF_Status* status);

=head2 TFE_RegisterCustomDevice

=over 2

  Registers a custom device for use with eager execution.
  
  Eager operations may be placed on this device, e.g.  `with
  tf.device("CUSTOM"):` from Python if `device_name` for this call is
  "/job:localhost/replica:0/task:0/device:CUSTOM:0".
  
  The custom device defines copy operations for moving TensorHandles on and
  off, and an execution operation for named operations. Often execution will
  simply wrap op execution on one or more physical devices.
  
  device_info is an opaque caller-defined type stored with the custom device
  which is passed to the functions referenced in the TFE_CustomDevice struct
  `device` (execute, delete_device, etc.). It can for example contain the
  names of wrapped devices.
  
  There are currently no graph semantics implemented for registered custom
  devices, so executing tf.functions which contain operations placed on the
  custom devices will fail.
  
  `device_name` must not name an existing physical or custom device. It must
  follow the format:
  
     /job:<name>/replica:<replica>/task:<task>/device:<type>:<device_num>
  
  If the device is successfully registered, `status` is set to TF_OK. Otherwise
  the device is not usable. In case of a bad status, `device.delete_device` is
  still called on `device_info` (i.e. the caller does not retain ownership).
  
  This API is highly experimental, and in particular is expected to change when
  it starts supporting operations with attributes and when tf.function support
  is added.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_RegisterCustomDevice(TFE_Context* ctx,
                                                      TFE_CustomDevice device,
                                                      const char* device_name,
                                                      void* device_info,
                                                      TF_Status* status);

=head2 TFE_IsCustomDevice

=over 2

  Returns whether `device_name` maps to a registered custom device.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern bool TFE_IsCustomDevice(TFE_Context* ctx,
                                                const char* device_name);

=head2 TFE_NewCustomDeviceTensorHandle

=over 2

  Creates a new TensorHandle from memory residing in a custom device. Takes
  ownership of the memory pointed to by `tensor_handle_data`, and calls
  `methods.deallocator` to release it after TF no longer needs it or in case of
  an error.
  
  This call is similar to `TFE_NewTensorHandleFromDeviceMemory`, but supports
  custom devices instead of physical devices and does not require blocking
  waiting for exact shapes.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_TensorHandle* TFE_NewCustomDeviceTensorHandle(
      TFE_Context*, const char* device_name, TF_DataType, void* data,
      TFE_CustomDeviceTensorHandle methods, TF_Status* status);

=head2 TFE_ContextGetFunctionDef

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_ContextGetFunctionDef(TFE_Context* ctx,
                                                       const char* function_name,
                                                       TF_Buffer* buf,
                                                       TF_Status* status);

=head2 TFE_AllocateHostTensor

=over 2

  Allocate and return a new Tensor on the host.
  
  The caller must set the Tensor values by writing them to the pointer returned
  by TF_TensorData with length TF_TensorByteSize.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TF_Tensor* TFE_AllocateHostTensor(TFE_Context* ctx,
                                                          TF_DataType dtype,
                                                          const int64_t* dims,
                                                          int num_dims,
                                                          TF_Status* status);

=head2 TFE_NewTensorHandleFromTensor

=over 2

  Given a Tensor, wraps it with a TensorHandle.
  
  Similar to TFE_NewTensorHandle, but includes a pointer to the TFE_Context.
  The context should be identical to that of the Tensor.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT TFE_TensorHandle* TFE_NewTensorHandleFromTensor(
      TFE_Context* ctx, TF_Tensor* t, TF_Status* status);
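
These two calls combine naturally: allocate a host tensor, fill its buffer via C<TF_TensorData>, then wrap it in a handle. A hedged sketch, assuming libtensorflow is available (the shape and values are illustrative):

```c
/* Sketch: allocate a host tensor, fill it, wrap it in a handle.
 * Assumes libtensorflow is installed; shape/values are illustrative. */
#include <tensorflow/c/eager/c_api.h>
#include <tensorflow/c/eager/c_api_experimental.h>
#include <string.h>

int main(void) {
  TF_Status* status = TF_NewStatus();
  TFE_ContextOptions* opts = TFE_NewContextOptions();
  TFE_Context* ctx = TFE_NewContext(opts, status);
  TFE_DeleteContextOptions(opts);

  const int64_t dims[] = {3};
  TF_Tensor* t = TFE_AllocateHostTensor(ctx, TF_FLOAT, dims, 1, status);

  /* Caller fills the buffer returned by TF_TensorData. */
  float values[] = {1.0f, 2.0f, 3.0f};
  memcpy(TF_TensorData(t), values, TF_TensorByteSize(t));

  TFE_TensorHandle* h = TFE_NewTensorHandleFromTensor(ctx, t, status);

  TFE_DeleteTensorHandle(h);
  TF_DeleteTensor(t);
  TFE_DeleteContext(ctx);
  TF_DeleteStatus(status);
  return 0;
}
```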

=head2 TFE_CreatePackedTensorHandle

=over 2

  Create a packed TensorHandle with the given list of TensorHandles.
  If `handles` are on the same device, assign the same device to the packed
  handle; if `handles` are on different devices, assign a CompositeDevice to
  it.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_TensorHandle* TFE_CreatePackedTensorHandle(
      TFE_Context* ctx, TFE_TensorHandle** handles, int* num_handles,
      TF_Status* status);

=head2 TFE_ContextSetSoftDevicePlacement

=over 2

  Configure soft device placement policy for the eager executor. Note this
  policy is applied to any subsequent op executions.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT void TFE_ContextSetSoftDevicePlacement(TFE_Context* ctx,
                                                        unsigned char enable,
                                                        TF_Status* status);

=head2 TFE_ContextSetLogDevicePlacement

=over 2

  Configure device placement policy logging for the eager executor. Note this
  policy is applied to any subsequent op executions.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT void TFE_ContextSetLogDevicePlacement(TFE_Context* ctx,
                                                       unsigned char enable,
                                                       TF_Status* status);

=head2 TFE_ContextSetRunEagerOpAsFunction

=over 2

  Enables running eager ops as functions.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT void TFE_ContextSetRunEagerOpAsFunction(TFE_Context* ctx,
                                                         unsigned char enable,
                                                         TF_Status* status);

=head2 TFE_ContextSetJitCompileRewrite

=over 2

  Enables the rewrite of jit_compile functions.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT void TFE_ContextSetJitCompileRewrite(TFE_Context* ctx,
                                                      unsigned char enable,
                                                      TF_Status* status);

=head2 TFE_TensorHandleDeviceType

=over 2

  Returns the device type of the operation that produced `h`.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern const char* TFE_TensorHandleDeviceType(
      TFE_TensorHandle* h, TF_Status* status);

=head2 TFE_TensorHandleDeviceID

=over 2

  Returns the device ID of the operation that produced `h`.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern int TFE_TensorHandleDeviceID(TFE_TensorHandle* h,
                                                     TF_Status* status);

=head2 TFE_TensorHandleGetStatus

=over 2

  Returns the status for the tensor handle. In TFRT, a tensor handle can carry
  error info if an error happens. If so, the status is set with that error
  info; otherwise, the status is set to OK.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_TensorHandleGetStatus(TFE_TensorHandle* h,
                                                       TF_Status* status);

=head2 TFE_GetExecutedOpNames

=over 2

  Get a comma-separated list of op names executed in graph functions dispatched
  to `ctx`. This feature is currently only enabled for TFRT debug builds, for
  performance and simplicity reasons.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_GetExecutedOpNames(TFE_Context* ctx,
                                                    TF_Buffer* buf,
                                                    TF_Status* status);

=head2 TFE_SetLogicalCpuDevices

=over 2

  Sets logical devices in the context's device manager.
  If logical devices are already configured at context initialization
  through TFE_ContextOptions, this method should not be called.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_SetLogicalCpuDevices(TFE_Context* ctx,
                                                      int num_cpus,
                                                      const char* prefix,
                                                      TF_Status* status);

=head2 TFE_InsertConfigKeyValue

=over 2

  Sets a configuration key and value using the coordination service.
  If the coordination service is enabled, the key-value pair will be stored on
  the leader and become accessible to all workers in the cluster.
  Currently, a config key can only be set with one value; subsequently
  setting the same key will lead to an error.
  
  Note that the key-values are only expected to be used for cluster
  configuration data, and should not be used for storing a large amount of data
  or being accessed very frequently.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_InsertConfigKeyValue(TFE_Context* ctx,
                                                      const char* key,
                                                      const char* value,
                                                      TF_Status* status);

=head2 TFE_GetConfigKeyValue

=over 2

  Get configuration key and value using coordination service.
  The config key must be set before getting its value. Getting value of
  non-existing config keys will result in errors.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_GetConfigKeyValue(TFE_Context* ctx,
                                                   const char* key,
                                                   TF_Buffer* value_buf,
                                                   TF_Status* status);

=head2 TFE_DeleteConfigKeyValue

=over 2

  Delete configuration key-value. If `key` is a directory, recursively clean up
  all key-values under the path specified by `key`.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_DeleteConfigKeyValue(TFE_Context* ctx,
                                                      const char* key,
                                                      TF_Status* status);

=head2 TFE_ReportErrorToCluster

=over 2

  Report error (specified by error_code and error_message) to other tasks in
  the cluster.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_ReportErrorToCluster(TFE_Context* ctx,
                                                      int error_code,
                                                      const char* error_message,
                                                      TF_Status* status);

=head2 TFE_GetTaskStates

=over 2

  Get task states from the Coordination Service.

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_GetTaskStates(TFE_Context* ctx,
                                               const TF_Buffer& tasks,
                                               void* states, TF_Status* status);

=head2 TFE_WaitAtBarrier

=over 2

=back

  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_WaitAtBarrier(TFE_Context* ctx,
                                               const char* barrier_id,
                                               int64_t barrier_timeout_in_ms,
                                               TF_Status* status);

=head2 TF_GetNodesToPreserveListSize

=over 2

  Get a set of node names that must be preserved. They cannot be transformed
  or removed during the graph transformation. This includes feed and fetch
  nodes, keep_ops, init_ops. Fills in `num_values` and `storage_size`; these
  will be used in `TF_GetNodesToPreserveList`.

=back

  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_GetNodesToPreserveListSize(
      const TF_GrapplerItem* item, int* num_values, size_t* storage_size,
      TF_Status* status);

=head2 TF_GetNodesToPreserveList

=over 2

  Get a set of node names that must be preserved. They cannot be transformed
  or removed during the graph transformation. This includes feed and fetch
  nodes, keep_ops, init_ops. Fills in `values` and `lengths`, each of which
  must point to an array of length at least `num_values`.

  The elements of values will point to addresses in `storage` which must be at
  least `storage_size` bytes in length. `num_values` and `storage_size` can be
  obtained from `TF_GetNodesToPreserveListSize`.

  Fails if `storage_size` is too small to hold the requested number of strings.

=back

  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_GetNodesToPreserveList(
      const TF_GrapplerItem* item, char** values, size_t* lengths, int num_values,
      void* storage, size_t storage_size, TF_Status* status);

=head2 TF_GetFetchNodesListSize

=over 2

  Get a set of node names for fetch nodes. Fills in `num_values` and
  `storage_size`; they will be used in `TF_GetFetchNodesList`.

=back

  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_GetFetchNodesListSize(const TF_GrapplerItem* item,
                                                      int* num_values,
                                                      size_t* storage_size,
                                                      TF_Status* status);

=head2 TF_GetFetchNodesList

=over 2

  Get a set of node names for fetch nodes. Fills in `values` and `lengths`,
  each of which must point to an array of length at least `num_values`.

  The elements of values will point to addresses in `storage` which must be at
  least `storage_size` bytes in length. `num_values` and `storage_size` can be
  obtained from `TF_GetFetchNodesListSize`.

  Fails if `storage_size` is too small to hold the requested number of strings.

=back

  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_GetFetchNodesList(const TF_GrapplerItem* item,
                                                  char** values, size_t* lengths,
                                                  int num_values, void* storage,
                                                  size_t storage_size,
                                                  TF_Status* status);

=head2 TF_NewGraphProperties

=over 2

  Create GraphProperties. The item must outlive the properties.

=back

  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern TF_GraphProperties* TF_NewGraphProperties(
      const TF_GrapplerItem* item);

=head2 TF_DeleteGraphProperties

=over 2

  Delete GraphProperties.

=back

  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_DeleteGraphProperties(
      TF_GraphProperties* graph_properties);

=head2 TF_InferStatically

=over 2

  Infer tensor shapes through abstract interpretation.
  If assume_valid_feeds is true, it can help infer shapes in the fanout of fed
  nodes. This may cause incorrectness in graph analyses, but is useful for
  simulation or scheduling.
  If aggressive_shape_inference is true, nodes are executed on the host to
  identify output values when possible, and other aggressive strategies are
  applied. This may cause incorrectness in graph analyses, but is useful for
  simulation or scheduling.
  If include_input_tensor_values is true, the values of constant tensors will
  be included in the input properties.
  If include_output_tensor_values is true, the values of constant tensors will
  be included in the output properties.

=back

  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_InferStatically(
      TF_GraphProperties* graph_properties, TF_Bool assume_valid_feeds,
      TF_Bool aggressive_shape_inference, TF_Bool include_input_tensor_values,
      TF_Bool include_output_tensor_values, TF_Status* s);

=head2 TF_GetInputPropertiesListSize

=over 2

  Get the size of input OpInfo::TensorProperties given node name.

=back

  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_GetInputPropertiesListSize(
      TF_GraphProperties* graph_properties, const char* name, int* num_values,
      TF_Status* status);

=head2 TF_GetOutputPropertiesListSize

=over 2

  Get the size of output OpInfo::TensorProperties given node name.

=back

  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_GetOutputPropertiesListSize(
      TF_GraphProperties* graph_properties, const char* name, int* num_values,
      TF_Status* status);

=head2 TF_GetInputPropertiesList

=over 2

  Get a list of input OpInfo::TensorProperties given node name.
  Return the serialized list `properties`.

=back

  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_GetInputPropertiesList(
      TF_GraphProperties* graph_properties, const char* name,
      TF_Buffer** properties, int num_values, TF_Status* status);

=head2 TF_GetOutputPropertiesList

=over 2

  Get a list of output OpInfo::TensorProperties given node name.
  Return the serialized list `properties`.

=back

  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_GetOutputPropertiesList(
      TF_GraphProperties* graph_properties, const char* name,
      TF_Buffer** properties, int num_values, TF_Status* status);

=head2 TF_NewFunctionLibraryDefinition

=over 2

  Create a new FunctionLibraryDefinition.

=back

  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern TF_FunctionLibraryDefinition*
  TF_NewFunctionLibraryDefinition(const TF_Buffer* graph_buf, TF_Status* status);

=head2 TF_DeleteFunctionLibraryDefinition

=over 2

  Delete a FunctionLibraryDefinition.

=back

  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_DeleteFunctionLibraryDefinition(
      TF_FunctionLibraryDefinition* fn_lib);

=head2 TF_LookUpOpDef

=over 2

  Shorthand for calling LookUp to get the OpDef from FunctionLibraryDefinition
  given op name. The returned OpDef is represented by TF_Buffer.

=back

  /* From <tensorflow/c/experimental/grappler/grappler.h> */
  TF_CAPI_EXPORT extern void TF_LookUpOpDef(TF_FunctionLibraryDefinition* fn_lib,
                                            const char* name, TF_Buffer* buf,
                                            TF_Status* s);

=head2 TF_TensorSpecDataType

=over 2

  Returns the dtype associated with the TensorSpec.

=back

  /* From <tensorflow/c/experimental/saved_model/public/tensor_spec.h> */
  TF_CAPI_EXPORT extern TF_DataType TF_TensorSpecDataType(
      const TF_TensorSpec* spec);

=head2 TF_TensorSpecShape

=over 2

  Returns the shape associated with the TensorSpec. The returned Shape is not
  owned by the caller. Caller must not call TF_DeleteShape on the returned
  shape.

=back

  /* From <tensorflow/c/experimental/saved_model/public/tensor_spec.h> */
  TF_CAPI_EXPORT extern const TF_Shape* TF_TensorSpecShape(
      const TF_TensorSpec* spec);

=head2 TF_InitPlugin

=over 2

  /// Initializes a TensorFlow plugin.
  ///
  /// Must be implemented by the plugin DSO. It is called by TensorFlow runtime.
  ///
  /// Filesystem plugins can be loaded on demand by users via
  /// `Env::LoadLibrary` or during TensorFlow's startup if they are on certain
  /// paths (although this has a security risk if two plugins register for the
  /// same filesystem and the malicious one loads before the legitimate one -
  /// but we consider this to be something that users should care about and
  /// manage themselves). In both of these cases, core TensorFlow looks for
  /// the `TF_InitPlugin` symbol and calls this function.
  ///
  /// For every filesystem URI scheme that this plugin supports, the plugin must
  /// add one `TF_FilesystemPluginInfo` entry in `plugin_info->ops` and call
  /// `TF_SetFilesystemVersionMetadata` for that entry.
  ///
  /// Plugins must also initialize `plugin_info->plugin_memory_allocate` and
  /// `plugin_info->plugin_memory_free` to ensure memory allocated by plugin is
  /// freed in a compatible way.

=back

  /* From <tensorflow/c/experimental/filesystem/filesystem_interface.h> */
  TF_CAPI_EXPORT extern void TF_InitPlugin(TF_FilesystemPluginInfo* plugin_info);

=head2 TF_LoadSavedModel

=over 2

  Load a SavedModel from `dirname`. We expect the SavedModel to contain a
  single Metagraph (as for those exported from TF2's `tf.saved_model.save`).
  
  Params:
   dirname - A directory filepath that the SavedModel is at.
   ctx - A TFE_Context containing optional load/TF runtime options.
         `ctx` must outlive the returned TF_SavedModel pointer.
   status - Set to OK on success and an appropriate error on failure.
  Returns:
   If status is not OK, returns nullptr. Otherwise, returns a newly created
   TF_SavedModel instance. It must be deleted by calling TF_DeleteSavedModel.

=back

  /* From <tensorflow/c/experimental/saved_model/public/saved_model_api.h> */
  TF_CAPI_EXPORT extern TF_SavedModel* TF_LoadSavedModel(const char* dirname,
                                                         TFE_Context* ctx,
                                                         TF_Status* status);

=head2 TF_LoadSavedModelWithTags

=over 2

  Load a SavedModel from `dirname`.
  
  Params:
   dirname - A directory filepath that the SavedModel is at.
   ctx - A TFE_Context containing optional load/TF runtime options.
         `ctx` must outlive the returned TF_SavedModel pointer.
   tags - char* array of SavedModel tags. We will load the metagraph matching
          the tags.
   tags_len - number of elements in the `tags` array.
   status - Set to OK on success and an appropriate error on failure.
  Returns:
   If status is not OK, returns nullptr. Otherwise, returns a newly created
   TF_SavedModel instance. It must be deleted by calling TF_DeleteSavedModel.

=back

  /* From <tensorflow/c/experimental/saved_model/public/saved_model_api.h> */
  TF_CAPI_EXPORT extern TF_SavedModel* TF_LoadSavedModelWithTags(
      const char* dirname, TFE_Context* ctx, const char* const* tags,
      int tags_len, TF_Status* status);

=head2 TF_DeleteSavedModel

=over 2

  Deletes a TF_SavedModel, and frees any resources owned by it.

=back

  /* From <tensorflow/c/experimental/saved_model/public/saved_model_api.h> */
  TF_CAPI_EXPORT extern void TF_DeleteSavedModel(TF_SavedModel* model);

=head2 TF_GetSavedModelConcreteFunction

=over 2

  Retrieve a function from the TF2 SavedModel via function path.
  
  Params:
   model - The TF2 SavedModel to load a function from.
   function_path - A string containing the path from the root saved python
                   object to a tf.function method.
                   TODO(bmzhao): Add a detailed example of this with a
                   python tf.module before moving this out of experimental.
   status - Set to OK on success and an appropriate error on failure.
  Returns:
   If status is not OK, returns nullptr. Otherwise, returns a
   TF_ConcreteFunction instance. The lifetime of this instance is
   "conceptually" bound to `model`. Once `model` is deleted, all
   `TF_ConcreteFunctions` retrieved from it are invalid, and have been deleted.

=back

  /* From <tensorflow/c/experimental/saved_model/public/saved_model_api.h> */
  TF_CAPI_EXPORT extern TF_ConcreteFunction* TF_GetSavedModelConcreteFunction(
      TF_SavedModel* model, const char* function_path, TF_Status* status);

=head2 TF_GetSavedModelSignatureDefFunction

=over 2

  Retrieve a function from the TF SavedModel via a SignatureDef key.
  
  Params:
   model - The SavedModel to load a function from.
   signature_def_key - The string key of the SignatureDef map of a SavedModel:
                       https://github.com/tensorflow/tensorflow/blob/69b08900b1e991d84bce31f3b404f5ed768f339f/tensorflow/core/protobuf/meta_graph.proto#L89
   status - Set to OK on success and an appropriate error on failure.
  Returns:
   If status is not OK, returns nullptr. Otherwise, returns a
   TF_SignatureDefFunction instance. Once `model` is deleted, all
   `TF_SignatureDefFunctions` retrieved from it are invalid, and have been
   deleted.

=back

  /* From <tensorflow/c/experimental/saved_model/public/saved_model_api.h> */
  TF_CAPI_EXPORT extern TF_SignatureDefFunction*
  TF_GetSavedModelSignatureDefFunction(TF_SavedModel* model,
                                       const char* signature_def_key,
                                       TF_Status* status);

=head2 TF_ConcreteFunctionGetMetadata

=over 2

  Returns FunctionMetadata associated with `func`. Metadata's lifetime is
  bound to `func`, which is bound to the TF_SavedModel it was loaded from.

=back

  /* From <tensorflow/c/experimental/saved_model/public/concrete_function.h> */
  TF_CAPI_EXPORT extern TF_FunctionMetadata* TF_ConcreteFunctionGetMetadata(
      TF_ConcreteFunction* func);

=head2 TF_ConcreteFunctionMakeCallOp

=over 2

  Returns a TFE_Op suitable for executing this function. Caller must provide
  all function inputs in `inputs`, and must not add any additional inputs on
  the returned op. (i.e. don't call TFE_OpAddInput or TFE_OpAddInputList).
  The caller is responsible for deleting the returned TFE_Op. If op
  construction fails, `status` will be non-OK and the returned pointer will be
  null.
  TODO(bmzhao): Remove this function in a subsequent change; Design + implement
  a Function Execution interface for ConcreteFunction that accepts a tagged
  union of types (tensorflow::Value). This effectively requires moving much of
  the implementation of function.py/def_function.py to C++, and exposing a
  high-level API here. A strawman for what this interface could look like:
  TF_Value* TF_ExecuteFunction(TFE_Context*, TF_ConcreteFunction*, TF_Value*
  inputs, int num_inputs, TF_Status* status);

=back

  /* From <tensorflow/c/experimental/saved_model/public/concrete_function.h> */
  TF_CAPI_EXPORT extern TFE_Op* TF_ConcreteFunctionMakeCallOp(
      TF_ConcreteFunction* func, TFE_TensorHandle** inputs, int num_inputs,
      TF_Status* status);

=head2 TF_SignatureDefParamName

=over 2

  Returns the name of the given parameter. The caller is not responsible for
  freeing the returned char*.

=back

  /* From <tensorflow/c/experimental/saved_model/public/signature_def_param.h> */
  TF_CAPI_EXPORT extern const char* TF_SignatureDefParamName(
      const TF_SignatureDefParam* param);

=head2 TF_SignatureDefParamTensorSpec

=over 2

  Returns the TensorSpec associated with the given parameter. The caller is
  not responsible for freeing the returned TF_TensorSpec*.

=back

  /* From <tensorflow/c/experimental/saved_model/public/signature_def_param.h> */
  TF_CAPI_EXPORT extern const TF_TensorSpec* TF_SignatureDefParamTensorSpec(
      const TF_SignatureDefParam* param);

=head2 TF_SignatureDefFunctionGetMetadata

=over 2

  Returns FunctionMetadata associated with `func`. Metadata's lifetime is
  bound to `func`, which is bound to the TF_SavedModel it was loaded from.

=back

  /* From <tensorflow/c/experimental/saved_model/public/signature_def_function.h> */
  TF_CAPI_EXPORT extern TF_SignatureDefFunctionMetadata*
  TF_SignatureDefFunctionGetMetadata(TF_SignatureDefFunction* func);

=head2 TF_SignatureDefFunctionMakeCallOp

=over 2

  Returns a TFE_Op suitable for executing this function. Caller must provide
  all function inputs in `inputs`, and must not add any additional inputs on
  the returned op. (i.e. don't call TFE_OpAddInput or TFE_OpAddInputList).
  The caller is responsible for deleting the returned TFE_Op. If op
  construction fails, `status` will be non-OK and the returned pointer will be
  null.

=back

  /* From <tensorflow/c/experimental/saved_model/public/signature_def_function.h> */
  TF_CAPI_EXPORT extern TFE_Op* TF_SignatureDefFunctionMakeCallOp(
      TF_SignatureDefFunction* func, TFE_TensorHandle** inputs, int num_inputs,
      TF_Status* status);

=head2 TF_ConcreteFunctionListSize

=over 2

  Returns the size of `list`.

=back

  /* From <tensorflow/c/experimental/saved_model/public/concrete_function_list.h> */
  TF_CAPI_EXPORT extern size_t TF_ConcreteFunctionListSize(
      TF_ConcreteFunctionList* list);

=head2 TF_ConcreteFunctionListGet

=over 2

  Returns the `i`th TF_ConcreteFunction in the list.

=back

  /* From <tensorflow/c/experimental/saved_model/public/concrete_function_list.h> */
  TF_CAPI_EXPORT extern TF_ConcreteFunction* TF_ConcreteFunctionListGet(
      TF_ConcreteFunctionList* list, int i);

=head2 TF_DeleteConcreteFunctionList

=over 2

  Deletes `list`.

=back

  /* From <tensorflow/c/experimental/saved_model/public/concrete_function_list.h> */
  TF_CAPI_EXPORT extern void TF_DeleteConcreteFunctionList(
      TF_ConcreteFunctionList* list);

=head2 TF_SignatureDefParamListSize

=over 2

  Returns the size of `list`.

=back

  /* From <tensorflow/c/experimental/saved_model/public/signature_def_param_list.h> */
  TF_CAPI_EXPORT extern size_t TF_SignatureDefParamListSize(
      const TF_SignatureDefParamList* list);

=head2 TF_SignatureDefParamListGet

=over 2

  Returns the `i`th TF_SignatureDefParam in the list.

=back

  /* From <tensorflow/c/experimental/saved_model/public/signature_def_param_list.h> */
  TF_CAPI_EXPORT extern const TF_SignatureDefParam* TF_SignatureDefParamListGet(
      const TF_SignatureDefParamList* list, int i);

=head2 TF_SignatureDefFunctionMetadataArgs

=over 2

  Retrieves the arguments of the SignatureDefFunction. The caller is not
  responsible for freeing the returned pointer.

=back

  /* From <tensorflow/c/experimental/saved_model/public/signature_def_function_metadata.h> */
  TF_CAPI_EXPORT extern const TF_SignatureDefParamList*
  TF_SignatureDefFunctionMetadataArgs(
      const TF_SignatureDefFunctionMetadata* list);

=head2 TF_SignatureDefFunctionMetadataReturns

=over 2

  Retrieves the returns of the SignatureDefFunction. The caller is not
  responsible for freeing the returned pointer.

=back

  /* From <tensorflow/c/experimental/saved_model/public/signature_def_function_metadata.h> */
  TF_CAPI_EXPORT extern const TF_SignatureDefParamList*
  TF_SignatureDefFunctionMetadataReturns(
      const TF_SignatureDefFunctionMetadata* list);

=head2 TF_EnableXLACompilation

=over 2

  When `enable` is true, set
  tensorflow.ConfigProto.OptimizerOptions.global_jit_level to ON_1, and also
  set XLA flag values to prepare for XLA compilation. Otherwise set
  global_jit_level to OFF.
  
  This and the next API are syntax sugar over TF_SetConfig(), and are used by
  clients that cannot read/write the tensorflow.ConfigProto proto.
  TODO: Migrate to TF_CreateConfig() below.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TF_EnableXLACompilation(TF_SessionOptions* options,
                                                     unsigned char enable);

=head2 TF_SetXlaEnableLazyCompilation

=over 2

  Set XLA's internal BuildXlaOpsPassFlags.tf_xla_enable_lazy_compilation to the
  value of `enable`. Also returns the original value of that flag.

  Use in tests to allow XLA to fall back to TF classic. This has global effect.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT unsigned char TF_SetXlaEnableLazyCompilation(
      unsigned char enable);

=head2 TF_SetTfXlaCpuGlobalJit

=over 2

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT unsigned char TF_SetTfXlaCpuGlobalJit(unsigned char enable);

=head2 TF_SetXlaAutoJitMode

=over 2

  Sets XLA's auto jit mode according to the specified string, which is parsed
  as if passed in XLA_FLAGS. This has global effect.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT void TF_SetXlaAutoJitMode(const char* mode);

=head2 TF_GetXlaAutoJitEnabled

=over 2

  Returns whether the single GPU or general XLA auto jit optimizations are
  enabled through MarkForCompilationPassFlags.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT unsigned char TF_GetXlaAutoJitEnabled();

=head2 TF_SetXlaMinClusterSize

=over 2

  Sets XLA's minimum cluster size. This has global effect.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT void TF_SetXlaMinClusterSize(int size);

=head2 TF_GetXlaConstantFoldingDisabled

=over 2

  Gets/Sets the TF/XLA flag for whether (true) or not (false) to disable
  constant folding. This is for testing to ensure that XLA is being tested
  rather than TensorFlow's CPU implementation through constant folding.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT unsigned char TF_GetXlaConstantFoldingDisabled();

=head2 TF_SetXlaConstantFoldingDisabled

=over 2

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT void TF_SetXlaConstantFoldingDisabled(
      unsigned char should_enable);

=head2 TF_CreateConfig

=over 2

  Create a serialized tensorflow.ConfigProto proto, where:
  
  a) ConfigProto.optimizer_options.global_jit_level is set to ON_1 if
  `enable_xla_compilation` is non-zero, and OFF otherwise.
  b) ConfigProto.gpu_options.allow_growth is set to `gpu_memory_allow_growth`.
  c) ConfigProto.device_count is set to `num_cpu_devices`.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TF_Buffer* TF_CreateConfig(
      unsigned char enable_xla_compilation, unsigned char gpu_memory_allow_growth,
      unsigned int num_cpu_devices);

=head2 TF_CreateRunOptions

=over 2

  Create a serialized tensorflow.RunOptions proto, where RunOptions.trace_level
  is set to FULL_TRACE if `enable_full_trace` is non-zero, and NO_TRACE
  otherwise.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TF_Buffer* TF_CreateRunOptions(
      unsigned char enable_full_trace);

=head2 TF_GraphDebugString

=over 2

  Returns the graph content in a human-readable format, with length set in
  `len`. The format is subject to change in the future.
  The returned string is heap-allocated, and caller should call free() on it.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern const char* TF_GraphDebugString(TF_Graph* graph,
                                                        size_t* len);

=head2 TF_FunctionDebugString

=over 2

  Returns the function content in a human-readable format, with length set in
  `len`. The format is subject to change in the future.
  The returned string is heap-allocated, and caller should call free() on it.
  
  Do not return const char*, because some foreign language bindings
  (e.g. Swift) cannot then call free() on the returned pointer.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern char* TF_FunctionDebugString(TF_Function* func,
                                                     size_t* len);

=head2 TF_DequeueNamedTensor

=over 2

  Caller must call TF_DeleteTensor() on the returned tensor. If the queue is
  empty, this call blocks.
  
  Tensors are enqueued via the corresponding TF enqueue op.
  TODO(hongm): Add support for `timeout_ms`.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TF_Tensor* TF_DequeueNamedTensor(TF_Session* session,
                                                         int tensor_id,
                                                         TF_Status* status);

=head2 TF_EnqueueNamedTensor

=over 2

  On success, enqueues `tensor` into a TF-managed FifoQueue given by
  `tensor_id`, associated with `session`. There must be a graph node named
  "fifo_queue_enqueue_<tensor_id>", to be executed by this API call. It reads
  from a placeholder node "arg_tensor_enqueue_<tensor_id>".
  
  `tensor` is still owned by the caller. This call blocks if the queue
  has reached its capacity, and unblocks when the queued tensors again
  drop below the capacity due to dequeuing.
  
  Tensors are dequeued via the corresponding TF dequeue op.
  TODO(hongm): Add support for `timeout_ms`.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TF_EnqueueNamedTensor(TF_Session* session,
                                                   int tensor_id,
                                                   TF_Tensor* tensor,
                                                   TF_Status* status);

=head2 TF_MakeInternalErrorStatus

=over 2

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TF_MakeInternalErrorStatus(TF_Status* status,
                                                        const char* errMsg);

=head2 TF_NewCheckpointReader

=over 2

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TF_CheckpointReader* TF_NewCheckpointReader(
      const char* filename, TF_Status* status);

=head2 TF_DeleteCheckpointReader

=over 2

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TF_DeleteCheckpointReader(
      TF_CheckpointReader* reader);

=head2 TF_CheckpointReaderHasTensor

=over 2

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern int TF_CheckpointReaderHasTensor(
      TF_CheckpointReader* reader, const char* name);

=head2 TF_CheckpointReaderGetVariable

=over 2

  Get the variable name at the given index

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern const char* TF_CheckpointReaderGetVariable(
      TF_CheckpointReader* reader, int index);

=head2 TF_CheckpointReaderSize

=over 2

  Get the number of variables in the checkpoint.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern int TF_CheckpointReaderSize(TF_CheckpointReader* reader);

=head2 TF_CheckpointReaderGetVariableDataType

=over 2

  Get the DataType of a variable

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TF_DataType TF_CheckpointReaderGetVariableDataType(
      TF_CheckpointReader* reader, const char* name);

=head2 TF_CheckpointReaderGetVariableShape

=over 2

  Read the shape of a variable and write it to `dims`.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TF_CheckpointReaderGetVariableShape(
      TF_CheckpointReader* reader, const char* name, int64_t* dims, int num_dims,
      TF_Status* status);

=head2 TF_CheckpointReaderGetVariableNumDims

=over 2

  Get the number of dimensions of a variable.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern int TF_CheckpointReaderGetVariableNumDims(
      TF_CheckpointReader* reader, const char* name);

=head2 TF_CheckpointReaderGetTensor

=over 2

  Load the value of a variable.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TF_Tensor* TF_CheckpointReaderGetTensor(
      TF_CheckpointReader* reader, const char* name, TF_Status* status);

=head2 TF_NewAttrBuilder

=over 2

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TF_AttrBuilder* TF_NewAttrBuilder(const char* op_name);

=head2 TF_DeleteAttrBuilder

=over 2

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TF_DeleteAttrBuilder(TF_AttrBuilder* builder);

=head2 TF_AttrBuilderSetType

=over 2

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TF_AttrBuilderSetType(TF_AttrBuilder* builder,
                                                   const char* attr_name,
                                                   TF_DataType value);

=head2 TF_AttrBuilderSetTypeList

=over 2

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TF_AttrBuilderSetTypeList(TF_AttrBuilder* builder,
                                                       const char* attr_name,
                                                       const TF_DataType* values,
                                                       int num_values);

=head2 TF_AttrBuilderCheckCanRunOnDevice

=over 2

  Checks the tensorflow::NodeDef built via the methods above to see if it can
  run on device_type.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TF_AttrBuilderCheckCanRunOnDevice(
      TF_AttrBuilder* builder, const char* device_type, TF_Status* status);

=head2 TF_GetNumberAttrForOpListInput

=over 2

  For argument number input_index, fetch the corresponding number_attr that
  needs to be updated with the argument length of the input list.
  Returns nullptr if there is any problem like op_name is not found, or the
  argument does not support this attribute type.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern const char* TF_GetNumberAttrForOpListInput(
      const char* op_name, int input_index, TF_Status* status);

=head2 TF_OpIsStateful

=over 2

  Returns 1 if the op is stateful, 0 otherwise. The return value is undefined
  if the status is not ok.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern int TF_OpIsStateful(const char* op_type,
                                            TF_Status* status);

=head2 TF_InitMain

=over 2

  Platform-specific initialization routine. Very few platforms actually require
  this to be called.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT void TF_InitMain(const char* usage, int* argc, char*** argv);

=head2 TF_PickUnusedPortOrDie

=over 2

  Platform-specific implementation to return an unused port. (This should be
  used in tests only.)

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT int TF_PickUnusedPortOrDie(void);

=head2 TFE_NewTensorHandleFromScalar

=over 2

  Fast-path method that makes constructing a single scalar tensor require less
  overhead and fewer copies.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_TensorHandle* TFE_NewTensorHandleFromScalar(
      TF_DataType data_type, void* data, size_t len, TF_Status* status);
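
A hedged C sketch of creating and releasing a scalar handle; it assumes C<TFE_DeleteTensorHandle> from the eager C API header, and the scalar value is illustrative:

  #include <tensorflow/c/c_api.h>
  #include <tensorflow/c/eager/c_api.h>
  #include <tensorflow/c/c_api_experimental.h>

  void make_scalar(void) {
      float value = 1.5f;
      TF_Status* status = TF_NewStatus();
      /* The scalar data is read from `value`; no tensor buffer needs
       * to be allocated by the caller. */
      TFE_TensorHandle* handle = TFE_NewTensorHandleFromScalar(
          TF_FLOAT, &value, sizeof(value), status);
      if (TF_GetCode(status) == TF_OK)
          TFE_DeleteTensorHandle(handle);
      TF_DeleteStatus(status);
  }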

=head2 TFE_EnableCollectiveOps

=over 2

  Specify the server_def that enables collective ops.
  This is different from the above function in that it doesn't create remote
  contexts, and remotely executing ops is not possible. It just enables
  communication for collective ops.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_EnableCollectiveOps(TFE_Context* ctx,
                                                     const void* proto,
                                                     size_t proto_len,
                                                     TF_Status* status);

=head2 TFE_AbortCollectiveOps

=over 2

  Aborts all ongoing collectives with the specified status. After abortion,
  subsequent collectives will error with this status immediately. To reset the
  collectives, create a new EagerContext.
  
  This is intended to be used when a peer failure is detected.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_AbortCollectiveOps(TFE_Context* ctx,
                                                    TF_Status* status);

=head2 TFE_CollectiveOpsCheckPeerHealth

=over 2

  Checks the health of collective ops peers. An explicit health check is
  needed in multi-worker collective ops to detect failures in the cluster.
  If a peer is down, collective ops may hang.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_CollectiveOpsCheckPeerHealth(
      TFE_Context* ctx, const char* task, int64_t timeout_in_ms,
      TF_Status* status);

=head2 TF_NewShapeAndTypeList

=over 2

  API for manipulating TF_ShapeAndTypeList objects.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TF_ShapeAndTypeList* TF_NewShapeAndTypeList(
      int num_shapes);

=head2 TF_ShapeAndTypeListSetShape

=over 2

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TF_ShapeAndTypeListSetShape(
      TF_ShapeAndTypeList* shape_list, int index, const int64_t* dims,
      int num_dims);

=head2 TF_ShapeAndTypeListSetUnknownShape

=over 2

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TF_ShapeAndTypeListSetUnknownShape(
      TF_ShapeAndTypeList* shape_list, int index);

=head2 TF_ShapeAndTypeListSetDtype

=over 2

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TF_ShapeAndTypeListSetDtype(
      TF_ShapeAndTypeList* shape_list, int index, TF_DataType dtype);

=head2 TF_DeleteShapeAndTypeList

=over 2

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TF_DeleteShapeAndTypeList(
      TF_ShapeAndTypeList* shape_list);

=head2 TF_DeleteShapeAndTypeListArray

=over 2

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TF_DeleteShapeAndTypeListArray(
      TF_ShapeAndTypeList** shape_list_array, int num_items);

=head2 TFE_InferShapes

=over 2

  Infer shapes for the given `op`. The arguments mimic the arguments of the
  `shape_inference::InferenceContext` constructor. Note the following:
    - The inputs of the `op` are not used for shape inference. So, it is
      OK to not have the inputs properly set in `op`. See `input_tensors`
      if you want shape inference to consider the input tensors of the
      op for shape inference.
    - The types need not be set in `input_shapes` as it is not used.
    - The number of `input_tensors` should be the same as the number of items
      in `input_shapes`.
  
  The results are returned in `output_shapes` and
  `output_resource_shapes_and_types`. The caller is responsible for freeing the
  memory in these buffers by calling `TF_DeleteShapeAndTypeList`.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_InferShapes(
      TFE_Op* op, TF_ShapeAndTypeList* input_shapes, TF_Tensor** input_tensors,
      TF_ShapeAndTypeList* input_tensor_as_shapes,
      TF_ShapeAndTypeList** input_resource_shapes_and_types,
      TF_ShapeAndTypeList** output_shapes,
      TF_ShapeAndTypeList*** output_resource_shapes_and_types, TF_Status* status);
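
A hedged C sketch of the shape-inference flow using the C<TF_ShapeAndTypeList> helpers above; C<op> stands for an already-configured C<TFE_Op> with a single input, and the 2x3 input shape is an illustrative value:

  #include <stdint.h>
  #include <tensorflow/c/c_api.h>
  #include <tensorflow/c/eager/c_api.h>
  #include <tensorflow/c/c_api_experimental.h>

  void infer_unary_op_shapes(TFE_Op* op) {
      TF_Status* status = TF_NewStatus();
      TF_ShapeAndTypeList* input_shapes = TF_NewShapeAndTypeList(1);
      int64_t dims[] = {2, 3};
      TF_ShapeAndTypeListSetShape(input_shapes, 0, dims, 2);

      TF_ShapeAndTypeList* output_shapes = NULL;
      TFE_InferShapes(op, input_shapes,
                      NULL,            /* input_tensors: not consulted here */
                      NULL,            /* input tensors as shapes */
                      NULL,            /* input resource shapes and types */
                      &output_shapes,
                      NULL,            /* output resource shapes and types */
                      status);

      /* The caller frees both the input and output lists. */
      TF_DeleteShapeAndTypeList(input_shapes);
      TF_DeleteShapeAndTypeList(output_shapes);
      TF_DeleteStatus(status);
  }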

=head2 TF_ImportGraphDefOptionsSetValidateColocationConstraints

=over 2

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void
  TF_ImportGraphDefOptionsSetValidateColocationConstraints(
      TF_ImportGraphDefOptions* opts, unsigned char enable);

=head2 TF_LoadPluggableDeviceLibrary

=over 2

  Load the library specified by library_filename and register the pluggable
  device and related kernels present in that library. This function is not
  supported on mobile and embedded platforms and will fail if called.
  
  Pass "library_filename" to a platform-specific mechanism for dynamically
  loading a library. The rules for determining the exact location of the
  library are platform-specific and are not documented here.
  
  On success, returns the newly created library handle and places OK in status.
  The caller owns the library handle.
  
  On failure, returns nullptr and places an error status in status.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TF_Library* TF_LoadPluggableDeviceLibrary(
      const char* library_filename, TF_Status* status);

=head2 TF_DeletePluggableDeviceLibraryHandle

=over 2

  Frees the memory associated with the library handle.
  Does NOT unload the library.

=back

  /* From <tensorflow/c/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TF_DeletePluggableDeviceLibraryHandle(
      TF_Library* lib_handle);
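
The two functions above pair up: the load call returns an owned handle, and the delete call frees that handle without unloading the library. A hedged C sketch; the shared-object filename is an illustrative value:

  #include <stdio.h>
  #include <tensorflow/c/c_api.h>
  #include <tensorflow/c/c_api_experimental.h>

  void load_plugin(void) {
      TF_Status* status = TF_NewStatus();
      TF_Library* lib = TF_LoadPluggableDeviceLibrary(
          "libmy_pluggable_device.so", status);
      if (TF_GetCode(status) == TF_OK) {
          /* The pluggable device and its kernels are now registered. */
          TF_DeletePluggableDeviceLibraryHandle(lib);  /* frees the handle only */
      } else {
          fprintf(stderr, "Load failed: %s\n", TF_Message(status));
      }
      TF_DeleteStatus(status);
  }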

=head1 SEE ALSO

L<https://github.com/tensorflow/tensorflow/tree/master/tensorflow/c>

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Manual/GPU.pod

allocate enough memory, it will crash with an out-of-memory (OOM) error. This
is typical when running multiple programs that both use the GPU.

If you have multiple GPUs, you can control which GPUs your program can access
by using the
L<C<CUDA_VISIBLE_DEVICES> environment variable|https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars>
provided by the underlying CUDA library. This is typically
done by setting the variable in a C<BEGIN> block before loading
L<AI::TensorFlow::Libtensorflow>:

  BEGIN {
      # Set the specific GPU device that is available
      # to this program to GPU index 0, which is the
      # first GPU as listed in the output of `nvidia-smi`.
      $ENV{CUDA_VISIBLE_DEVICES} = '0';
      require AI::TensorFlow::Libtensorflow;
  }

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod

use strict;
use warnings;
use utf8;
use constant IN_IPERL => !! $ENV{PERL_IPERL_RUNNING};
no if IN_IPERL, warnings => 'redefine'; # fewer messages when re-running cells

use feature qw(say state postderef);
use Syntax::Construct qw(each-array);

use lib::projectroot qw(lib);

BEGIN {
    if( IN_IPERL ) {
        $ENV{TF_CPP_MIN_LOG_LEVEL} = 3;
    }
    require AI::TensorFlow::Libtensorflow;
}

use URI ();
use HTTP::Tiny ();
use Path::Tiny qw(path);

use File::Which ();

use List::Util 1.56 qw(mesh);

use Data::Printer ( output => 'stderr', return_value => 'void', filters => ['PDL'] );
use Data::Printer::Filter::PDL ();
use Text::Table::Tiny qw(generate_table);

use Imager;

my $s = AI::TensorFlow::Libtensorflow::Status->New;
sub AssertOK {
    die "Status $_[0]: " . $_[0]->Message
        unless $_[0]->GetCode == AI::TensorFlow::Libtensorflow::Status::OK;
    return;
}
AssertOK($s);

use PDL;
use AI::TensorFlow::Libtensorflow::DataType qw(FLOAT UINT8);

use FFI::Platypus::Memory qw(memcpy);
use FFI::Platypus::Buffer qw(scalar_to_pointer);

sub FloatPDLTOTFTensor {
    my ($p) = @_;
    return AI::TensorFlow::Libtensorflow::Tensor->New(
        FLOAT, [ reverse $p->dims ], $p->get_dataref, sub { undef $p }
    );
}

sub FloatTFTensorToPDL {
    my ($t) = @_;

    my $pdl = zeros(float,reverse( map $t->Dim($_), 0..$t->NumDims-1 ) );

    memcpy scalar_to_pointer( ${$pdl->get_dataref} ),
        scalar_to_pointer( ${$t->Data} ),
        $t->ByteSize;
    $pdl->upd_data;

    $pdl;
}

sub Uint8PDLTOTFTensor {
    my ($p) = @_;
    return AI::TensorFlow::Libtensorflow::Tensor->New(
        UINT8, [ reverse $p->dims ], $p->get_dataref, sub { undef $p }
    );
}

sub Uint8TFTensorToPDL {
    my ($t) = @_;

    my $pdl = zeros(byte,reverse( map $t->Dim($_), 0..$t->NumDims-1 ) );

    memcpy scalar_to_pointer( ${$pdl->get_dataref} ),
        scalar_to_pointer( ${$t->Data} ),
        $t->ByteSize;
    $pdl->upd_data;

    $pdl;
}

# image_size => [width, height] (but usually square images)
my %model_name_to_params = (
    centernet_hourglass_512x512 => {
        handle => 'https://tfhub.dev/tensorflow/centernet/hourglass_512x512/1',
        image_size => [ 512, 512 ],
    },
);

my $model_name = 'centernet_hourglass_512x512';

say "Selected model: $model_name : $model_name_to_params{$model_name}{handle}";

my $model_uri = URI->new( $model_name_to_params{$model_name}{handle} );
$model_uri->query_form( 'tf-hub-format' => 'compressed' );
my $model_base = substr( $model_uri->path, 1 ) =~ s,/,_,gr;
my $model_archive_path = "${model_base}.tar.gz";

my $http = HTTP::Tiny->new;

for my $download ( [ $model_uri  => $model_archive_path ],) {
    my ($uri, $path) = @$download;
    say "Downloading $uri to $path";
    next if -e $path;
    $http->mirror( $uri, $path );
}

use Archive::Extract;
my $ae = Archive::Extract->new( archive => $model_archive_path );
die "Could not extract archive" unless $ae->extract( to => $model_base );

my $saved_model = path($model_base)->child('saved_model.pb');
say "Saved model is in $saved_model" if -f $saved_model;

# Get the labels
my $response = $http->get('https://raw.githubusercontent.com/tensorflow/models/a4944a57ad2811e1f6a7a87589a9fc8a776e8d3c/object_detection/data/mscoco_label_map.pbtxt');

my %labels_map = $response->{content} =~ m<
(?:item \s+ \{  \s+
  \Qname:\E \s+ "[^"]+" \s+
  \Qid:\E   \s+ (\d+) \s+
  \Qdisplay_name:\E \s+ "([^"]+)" \s+
})+
>sgx;

my $label_count = List::Util::max keys %labels_map;

say "We have a label count of $label_count. These labels include: ",
    join ", ", List::Util::head( 5, @labels_map{ sort keys %labels_map } );

my @tags = ( 'serve' );

if( File::Which::which('saved_model_cli')) {
    local $ENV{TF_CPP_MIN_LOG_LEVEL} = 3; # quiet the TensorFlow logger for the following command
    system(qw(saved_model_cli show),
        qw(--dir)           => $model_base,
        qw(--tag_set)       => join(',', @tags),
        qw(--signature_def) => 'serving_default'
    ) == 0 or die "Could not run saved_model_cli";
} else {
    say "Install the tensorflow Python package to get the `saved_model_cli` command.";
}

my $opt = AI::TensorFlow::Libtensorflow::SessionOptions->New;

my $graph = AI::TensorFlow::Libtensorflow::Graph->New;
my $session = AI::TensorFlow::Libtensorflow::Session->LoadFromSavedModel(
    $opt, undef, $model_base, \@tags, $graph, undef, $s
);
AssertOK($s);

my %ops = (
    in  => {
        op   =>  $graph->OperationByName('serving_default_input_tensor'),
        dict => {
            input_tensor => 0,
        }
    },
    out => {
        op => $graph->OperationByName('StatefulPartitionedCall'),
        dict => {
            detection_boxes   => 0,
            detection_classes => 1,
            detection_scores  => 2,
            num_detections    => 3,
        }
    },
);

my %outputs;

%outputs = map {
    my $put_type = $_;
    my $op = $ops{$put_type}{op};
    my $port_dict = $ops{$put_type}{dict};

   $put_type => +{
        map {
            my $dict_key = $_;
            my $index = $port_dict->{$_};
            $dict_key => AI::TensorFlow::Libtensorflow::Output->New( {
                oper => $op,
                index => $index,
            });
        } keys %$port_dict
     }
} keys %ops;

p %outputs;

use HTML::Tiny;

my %images_for_test_to_uri = (
    "beach_scene" => 'https://github.com/tensorflow/models/blob/master/research/object_detection/test_images/image2.jpg?raw=true',
);

my @image_names = sort keys %images_for_test_to_uri;
my $h = HTML::Tiny->new;

my $image_name = 'beach_scene';
if( IN_IPERL ) {
    IPerl->html(
        $h->a( { href => $images_for_test_to_uri{$image_name} },
            $h->img({
                src => $images_for_test_to_uri{$image_name},
                alt => $image_name,
                width => '100%',
            })
        ),
    );
}

sub load_image_to_pdl {
    my ($uri, $image_size) = @_;

    my $http = HTTP::Tiny->new;
    my $response = $http->get( $uri );
    die "Could not fetch image from $uri" unless $response->{success};
    say "Downloaded $uri";

    my $img = Imager->new;
    $img->read( data => $response->{content} );

    # Create PDL ndarray from Imager data in-memory.
    my $data;
    $img->write( data => \$data, type => 'raw' )
        or die "could not write ". $img->errstr;

    die "Image does not have 3 channels, it has @{[ $img->getchannels ]} channels"
        if $img->getchannels != 3;

    # $data is packed as PDL->dims == [w,h] with RGB pixels
    my $pdl_raw = zeros(byte, $img->getchannels, $img->getwidth, $img->getheight);
    ${ $pdl_raw->get_dataref } = $data;
    $pdl_raw->upd_data;

    $pdl_raw;
}

my @pdl_images = map {
    load_image_to_pdl(
        $images_for_test_to_uri{$_},
        $model_name_to_params{$model_name}{image_size}
    );
} ($image_names[0]);

my $pdl_image_batched = cat(@pdl_images);
my $t = Uint8PDLTOTFTensor($pdl_image_batched);

die "There should be 4 dimensions" unless $pdl_image_batched->ndims == 4;

die "With the final dimension of length 1" unless $pdl_image_batched->dim(3) == 1;

p $pdl_image_batched;
p $t;

my $RunSession = sub {
    my ($session, $t) = @_;
    my @outputs_t;

    my @keys = keys %{ $outputs{out} };
    my @values = $outputs{out}->@{ @keys };
    $session->Run(
        undef,
        [ values %{$outputs{in} } ], [$t],
        \@values, \@outputs_t,
        undef,
        undef,
        $s
    );
    AssertOK($s);

    return { mesh \@keys, \@outputs_t };
};

undef;

my $tftensor_output_by_name = $RunSession->($session, $t);

my %pdl_output_by_name = map {
    $_ => FloatTFTensorToPDL( $tftensor_output_by_name->{$_} )
} keys $tftensor_output_by_name->%*;

undef;

my $min_score_thresh = 0.30;

my $which_detect = which( $pdl_output_by_name{detection_scores} > $min_score_thresh );

my %subset;

$subset{detection_boxes}   = $pdl_output_by_name{detection_boxes}->dice('X', $which_detect);
$subset{detection_classes} = $pdl_output_by_name{detection_classes}->dice($which_detect);
$subset{detection_scores}  = $pdl_output_by_name{detection_scores}->dice($which_detect);

$subset{detection_class_labels}->@* = map { $labels_map{$_} } $subset{detection_classes}->list;

p %subset;

use PDL::Graphics::Gnuplot;

my $plot_output_path = 'objects-detected.png';
my $gp = gpwin('pngcairo', font => ",12", output => $plot_output_path, aa => 2, size => [10] );

my @qual_cmap = ('#a6cee3','#1f78b4','#b2df8a','#33a02c','#fb9a99','#e31a1c','#fdbf6f','#ff7f00','#cab2d6');

$gp->options(
    map {
        my $idx = $_;
        my $lc_rgb = $qual_cmap[ $subset{detection_classes}->slice("($idx)")->squeeze % @qual_cmap ];

        my $box_corners_yx_norm = $subset{detection_boxes}->slice([],$idx,[0,0,0]);
        $box_corners_yx_norm->reshape(2,2);

        my $box_corners_yx_img = $box_corners_yx_norm * $pdl_images[0]->shape->slice('-1:-2');

        my $from_xy = join ",", $box_corners_yx_img->slice('-1:0,(0)')->list;
        my $to_xy   = join ",", $box_corners_yx_img->slice('-1:0,(1)')->list;
        my $label_xy = join ",", $box_corners_yx_img->at(1,1), $box_corners_yx_img->at(0,1);

        (
            [ object => [ "rect" =>
                from => $from_xy, to => $to_xy,
                qq{front fs empty border lc rgb "$lc_rgb" lw 5} ], ],
            [ label => [
                sprintf("%s: %.1f",
                    $subset{detection_class_labels}[$idx],
                    100*$subset{detection_scores}->at($idx,0) ) =>
                at => $label_xy, 'left',
                offset => 'character 0,-0.25',
                qq{font ",12" boxed front tc rgb "#ffffff"} ], ],
        )
    } 0..$subset{detection_boxes}->dim(1)-1
);

$gp->plot(
    topcmds => q{set style textbox opaque fc "#505050f0" noborder},
    square => 1,
    yrange => [$pdl_images[0]->dim(2),0],
    with => 'image', $pdl_images[0],
);

$gp->close;

IPerl->png( bytestream => path($plot_output_path)->slurp_raw ) if IN_IPERL;

use Filesys::DiskUsage qw/du/;

my $total = du( { 'human-readable' => 1, dereference => 1 },
    $model_archive_path, $model_base );

say "Disk space usage: $total"; undef;

__END__

=pod

=encoding UTF-8

=head1 NAME

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod

=back

If you are running the code, you may optionally install the L<C<tensorflow> Python package|https://www.tensorflow.org/install/pip> in order to access the C<saved_model_cli> command, but this is only used for informational purposes.

=head1 TUTORIAL

=head2 Load the library

First, we need to load the C<AI::TensorFlow::Libtensorflow> library and more helpers. We then create an C<AI::TensorFlow::Libtensorflow::Status> object and a helper function to make sure that the calls to the C<libtensorflow> C library are working properly.

  use strict;
  use warnings;
  use utf8;
  use constant IN_IPERL => !! $ENV{PERL_IPERL_RUNNING};
  no if IN_IPERL, warnings => 'redefine'; # fewer messages when re-running cells
  
  use feature qw(say state postderef);
  use Syntax::Construct qw(each-array);
  
  use lib::projectroot qw(lib);
  
  BEGIN {
      if( IN_IPERL ) {
          $ENV{TF_CPP_MIN_LOG_LEVEL} = 3;
      }
      require AI::TensorFlow::Libtensorflow;
  }
  
  use URI ();
  use HTTP::Tiny ();
  use Path::Tiny qw(path);
  
  use File::Which ();
  
  use List::Util 1.56 qw(mesh);
  
  use Data::Printer ( output => 'stderr', return_value => 'void', filters => ['PDL'] );
  use Data::Printer::Filter::PDL ();
  use Text::Table::Tiny qw(generate_table);
  
  use Imager;
  
  my $s = AI::TensorFlow::Libtensorflow::Status->New;
  sub AssertOK {
      die "Status $_[0]: " . $_[0]->Message
          unless $_[0]->GetCode == AI::TensorFlow::Libtensorflow::Status::OK;
      return;
  }
  AssertOK($s);

And create helpers for converting between C<PDL> ndarrays and C<TFTensor> ndarrays.

  use PDL;
  use AI::TensorFlow::Libtensorflow::DataType qw(FLOAT UINT8);
  
  use FFI::Platypus::Memory qw(memcpy);
  use FFI::Platypus::Buffer qw(scalar_to_pointer);
  
  sub FloatPDLTOTFTensor {
      my ($p) = @_;
      return AI::TensorFlow::Libtensorflow::Tensor->New(
          FLOAT, [ reverse $p->dims ], $p->get_dataref, sub { undef $p }
      );
  }
  
  sub FloatTFTensorToPDL {
      my ($t) = @_;
  
      my $pdl = zeros(float,reverse( map $t->Dim($_), 0..$t->NumDims-1 ) );
  
      memcpy scalar_to_pointer( ${$pdl->get_dataref} ),
          scalar_to_pointer( ${$t->Data} ),
          $t->ByteSize;
      $pdl->upd_data;
  
      $pdl;
  }
  
  sub Uint8PDLTOTFTensor {
      my ($p) = @_;
      return AI::TensorFlow::Libtensorflow::Tensor->New(
          UINT8, [ reverse $p->dims ], $p->get_dataref, sub { undef $p }
      );
  }
  
  sub Uint8TFTensorToPDL {
      my ($t) = @_;
  
      my $pdl = zeros(byte,reverse( map $t->Dim($_), 0..$t->NumDims-1 ) );
  
      memcpy scalar_to_pointer( ${$pdl->get_dataref} ),
          scalar_to_pointer( ${$t->Data} ),
          $t->ByteSize;
      $pdl->upd_data;
  
      $pdl;
  }

=head2 Fetch the model and labels

We are going to use an L<object detection model|https://tfhub.dev/tensorflow/centernet/hourglass_512x512/1> from TensorFlow Hub based on the CenterNet architecture. We download both the model and COCO 2017 labels.

  # image_size => [width, height] (but usually square images)
  my %model_name_to_params = (
      centernet_hourglass_512x512 => {
          handle => 'https://tfhub.dev/tensorflow/centernet/hourglass_512x512/1',
          image_size => [ 512, 512 ],
      },
  );
  
  my $model_name = 'centernet_hourglass_512x512';
  
  say "Selected model: $model_name : $model_name_to_params{$model_name}{handle}";

We download the model to the current directory and then extract the model to a folder with the name given in C<$model_base>.

  my $model_uri = URI->new( $model_name_to_params{$model_name}{handle} );
  $model_uri->query_form( 'tf-hub-format' => 'compressed' );
  my $model_base = substr( $model_uri->path, 1 ) =~ s,/,_,gr;
  my $model_archive_path = "${model_base}.tar.gz";
  
  my $http = HTTP::Tiny->new;
  
  for my $download ( [ $model_uri  => $model_archive_path ],) {
      my ($uri, $path) = @$download;
      say "Downloading $uri to $path";
      next if -e $path;
      $http->mirror( $uri, $path );
  }
  
  use Archive::Extract;
  my $ae = Archive::Extract->new( archive => $model_archive_path );
  die "Could not extract archive" unless $ae->extract( to => $model_base );
  
  my $saved_model = path($model_base)->child('saved_model.pb');
  say "Saved model is in $saved_model" if -f $saved_model;

We need to download the COCO 2017 classification labels and parse out the mapping from the numeric index to the textual descriptions.

  # Get the labels
  my $response = $http->get('https://raw.githubusercontent.com/tensorflow/models/a4944a57ad2811e1f6a7a87589a9fc8a776e8d3c/object_detection/data/mscoco_label_map.pbtxt');
  
  my %labels_map = $response->{content} =~ m<
  (?:item \s+ \{  \s+
    \Qname:\E \s+ "[^"]+" \s+
    \Qid:\E   \s+ (\d+) \s+
    \Qdisplay_name:\E \s+ "([^"]+)" \s+
  })+
  >sgx;
  
  my $label_count = List::Util::max keys %labels_map;
  
  say "We have a label count of $label_count. These labels include: ",
      join ", ", List::Util::head( 5, @labels_map{ sort keys %labels_map } );

=head2 Load the model and session

We define the tag set C<[ 'serve' ]> which we will use to load the model.

  my @tags = ( 'serve' );

We can examine what computations are contained in the graph in terms of the names of the inputs and outputs of an operation found in the graph by running C<saved_model_cli>.

  if( File::Which::which('saved_model_cli')) {
      local $ENV{TF_CPP_MIN_LOG_LEVEL} = 3; # quiet the TensorFlow logger for the following command
      system(qw(saved_model_cli show),
          qw(--dir)           => $model_base,
          qw(--tag_set)       => join(',', @tags),
          qw(--signature_def) => 'serving_default'
      ) == 0 or die "Could not run saved_model_cli";
  } else {
      say "Install the tensorflow Python package to get the `saved_model_cli` command.";
  }

The above C<saved_model_cli> output shows that the model input is at C<serving_default_input_tensor:0> which means the operation named C<serving_default_input_tensor> at index C<0> and there are multiple outputs with different shapes.

Per the L<model description|https://tfhub.dev/tensorflow/centernet/hourglass_512x512/1> on TensorFlow Hub:

=over 2

B<Inputs>

A three-channel image of variable size - the model does NOT support batching. The input tensor is a C<tf.uint8> tensor with shape [1, height, width, 3] with values in [0, 255].

lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod  view on Meta::CPAN

C<detection_scores>: a C<tf.float32> tensor of shape [N] containing detection scores.

=back

=back

Note that the above documentation has two errors: both C<num_detections> and C<detection_classes> are not of type C<tf.int>, but are actually C<tf.float32>.

Now we can load the model from that folder with the tag set C<[ 'serve' ]> by using the C<LoadFromSavedModel> constructor to create a C<::Graph> and a C<::Session> for that graph.

  my $opt = AI::TensorFlow::Libtensorflow::SessionOptions->New;
  
  my $graph = AI::TensorFlow::Libtensorflow::Graph->New;
  my $session = AI::TensorFlow::Libtensorflow::Session->LoadFromSavedModel(
      $opt, undef, $model_base, \@tags, $graph, undef, $s
  );
  AssertOK($s);

So let's use the names from the C<saved_model_cli> output to create our C<::Output> C<ArrayRef>s.

  my %ops = (
      in  => {
          op   =>  $graph->OperationByName('serving_default_input_tensor'),
          dict => {
              input_tensor => 0,
          }
      },
      out => {
          op => $graph->OperationByName('StatefulPartitionedCall'),
          dict => {
              detection_boxes   => 0,
              detection_classes => 1,
              detection_scores  => 2,
              num_detections    => 3,
          }
      },
  );
  
  my %outputs;
  
  %outputs = map {
      my $put_type = $_;
      my $op = $ops{$put_type}{op};
      my $port_dict = $ops{$put_type}{dict};
  
     $put_type => +{
          map {
              my $dict_key = $_;
              my $index = $port_dict->{$_};
              $dict_key => AI::TensorFlow::Libtensorflow::Output->New( {
                  oper => $op,
                  index => $index,
              });
          } keys %$port_dict
       }
  } keys %ops;
  
  p %outputs;

Now we can get the following testing image from GitHub.

  use HTML::Tiny;
  
  my %images_for_test_to_uri = (
      "beach_scene" => 'https://github.com/tensorflow/models/blob/master/research/object_detection/test_images/image2.jpg?raw=true',
  );
  
  my @image_names = sort keys %images_for_test_to_uri;
  my $h = HTML::Tiny->new;
  
  my $image_name = 'beach_scene';
  if( IN_IPERL ) {
      IPerl->html(
          $h->a( { href => $images_for_test_to_uri{$image_name} },
              $h->img({
                  src => $images_for_test_to_uri{$image_name},
                  alt => $image_name,
                  width => '100%',
              })
          ),
      );
  }

=head2 Download the test image and transform it into suitable input data

We now fetch the image and prepare it to be in the needed format by using C<Imager>. Note that this model does not need the input image to be of a certain size so no resizing or padding is required.

Then we turn the C<Imager> data into a C<PDL> ndarray. Since we just need the 3 channels of the image as they are, they can be stored directly in a C<PDL> ndarray of type C<byte>.

Even though the model only takes a single image at a time, we concatenate the C<PDL> ndarrays so that the result has four (4) dimensions, with the last C<PDL> dimension (the batch dimension) of size one (1).

  sub load_image_to_pdl {
      my ($uri, $image_size) = @_;
  
      my $http = HTTP::Tiny->new;
      my $response = $http->get( $uri );
      die "Could not fetch image from $uri" unless $response->{success};
      say "Downloaded $uri";
  
      my $img = Imager->new;
      $img->read( data => $response->{content} );
  
      # Create PDL ndarray from Imager data in-memory.
      my $data;
      $img->write( data => \$data, type => 'raw' )
          or die "could not write ". $img->errstr;
  
      die "Image does not have 3 channels, it has @{[ $img->getchannels ]} channels"
          if $img->getchannels != 3;
  
      # $data is packed as PDL->dims == [w,h] with RGB pixels
      my $pdl_raw = zeros(byte, $img->getchannels, $img->getwidth, $img->getheight);
      ${ $pdl_raw->get_dataref } = $data;
      $pdl_raw->upd_data;
  
      $pdl_raw;
  }
  
  my @pdl_images = map {
      load_image_to_pdl(
          $images_for_test_to_uri{$_},
          $model_name_to_params{$model_name}{image_size}
      );
  } ($image_names[0]);
  
  my $pdl_image_batched = cat(@pdl_images);
  my $t = Uint8PDLTOTFTensor($pdl_image_batched);
  
  die "There should be 4 dimensions" unless $pdl_image_batched->ndims == 4;
  
  die "With the final dimension of length 1" unless $pdl_image_batched->dim(3) == 1;
  
  p $pdl_image_batched;
  p $t;

=head2 Run the model for inference

We can use the C<Run> method to run the session and get the multiple output C<TFTensor>s. The following uses the names in C<$outputs> mapping to help process the multiple outputs more easily.

  my $RunSession = sub {
      my ($session, $t) = @_;
      my @outputs_t;
  
      my @keys = keys %{ $outputs{out} };
      my @values = $outputs{out}->@{ @keys };
      $session->Run(
          undef,
          [ values %{$outputs{in} } ], [$t],
          \@values, \@outputs_t,
          undef,
          undef,
          $s
      );
      AssertOK($s);
  
      return { mesh \@keys, \@outputs_t };
  };
  
  undef;



  my $tftensor_output_by_name = $RunSession->($session, $t);
  
  my %pdl_output_by_name = map {
      $_ => FloatTFTensorToPDL( $tftensor_output_by_name->{$_} )
  } keys $tftensor_output_by_name->%*;
  
  undef;

=head2 Results summary

Then we use a score threshold to select the objects of interest.

  my $min_score_thresh = 0.30;
  
  my $which_detect = which( $pdl_output_by_name{detection_scores} > $min_score_thresh );
  
  my %subset;
  
  $subset{detection_boxes}   = $pdl_output_by_name{detection_boxes}->dice('X', $which_detect);
  $subset{detection_classes} = $pdl_output_by_name{detection_classes}->dice($which_detect);
  $subset{detection_scores}  = $pdl_output_by_name{detection_scores}->dice($which_detect);
  
  $subset{detection_class_labels}->@* = map { $labels_map{$_} } $subset{detection_classes}->list;
  
  p %subset;

The following uses the bounding boxes and class label information to draw boxes and labels on top of the image using Gnuplot.

  use PDL::Graphics::Gnuplot;
  
  my $plot_output_path = 'objects-detected.png';
  my $gp = gpwin('pngcairo', font => ",12", output => $plot_output_path, aa => 2, size => [10] );
  
  my @qual_cmap = ('#a6cee3','#1f78b4','#b2df8a','#33a02c','#fb9a99','#e31a1c','#fdbf6f','#ff7f00','#cab2d6');
  
  $gp->options(
      map {
          my $idx = $_;
          my $lc_rgb = $qual_cmap[ $subset{detection_classes}->slice("($idx)")->squeeze % @qual_cmap ];
  
          my $box_corners_yx_norm = $subset{detection_boxes}->slice([],$idx,[0,0,0]);
          $box_corners_yx_norm->reshape(2,2);
  
          my $box_corners_yx_img = $box_corners_yx_norm * $pdl_images[0]->shape->slice('-1:-2');
  
          my $from_xy = join ",", $box_corners_yx_img->slice('-1:0,(0)')->list;
          my $to_xy   = join ",", $box_corners_yx_img->slice('-1:0,(1)')->list;
          my $label_xy = join ",", $box_corners_yx_img->at(1,1), $box_corners_yx_img->at(0,1);
  
          (
              [ object => [ "rect" =>
                  from => $from_xy, to => $to_xy,
                  qq{front fs empty border lc rgb "$lc_rgb" lw 5} ], ],
              [ label => [
                  sprintf("%s: %.1f",
                      $subset{detection_class_labels}[$idx],
                      100*$subset{detection_scores}->at($idx,0) ) =>
                  at => $label_xy, 'left',
                  offset => 'character 0,-0.25',
                  qq{font ",12" boxed front tc rgb "#ffffff"} ], ],
          )
      } 0..$subset{detection_boxes}->dim(1)-1
  );
  
  $gp->plot(
      topcmds => q{set style textbox opaque fc "#505050f0" noborder},
      square => 1,
      yrange => [$pdl_images[0]->dim(2),0],
      with => 'image', $pdl_images[0],
  );
  
  $gp->close;
  
  IPerl->png( bytestream => path($plot_output_path)->slurp_raw ) if IN_IPERL;
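The model emits box corners as normalized C<[y, x]> coordinates in C<[0, 1]>, which is why the plotting code above multiplies C<$box_corners_yx_norm> by the image's height and width. A standalone sketch of that conversion, with an assumed image size in place of C<< $pdl_images[0]->shape >>:

  use strict;
  use warnings;

  # Assumed image dimensions for illustration.
  my ($img_h, $img_w) = (480, 640);

  # A normalized box: [ymin, xmin, ymax, xmax], each in [0, 1].
  my @box_norm = (0.25, 0.50, 0.75, 1.00);

  # Scale y coordinates by height and x coordinates by width.
  my ($ymin, $xmin, $ymax, $xmax) = map { sprintf '%.0f', $_ } (
      $box_norm[0] * $img_h, $box_norm[1] * $img_w,
      $box_norm[2] * $img_h, $box_norm[3] * $img_w,
  );
  # Pixel-space corners: from (x=320, y=120) to (x=640, y=360)
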

=head1 RESOURCE USAGE

  use Filesys::DiskUsage qw/du/;
  
  my $total = du( { 'human-readable' => 1, dereference => 1 },
      $model_archive_path, $model_base );
  
  say "Disk space usage: $total"; undef;

=head1 CPANFILE

  requires 'AI::TensorFlow::Libtensorflow';
  requires 'AI::TensorFlow::Libtensorflow::DataType';
  requires 'Archive::Extract';
  requires 'Data::Printer';
  requires 'Data::Printer::Filter::PDL';
  requires 'FFI::Platypus::Buffer';
  requires 'FFI::Platypus::Memory';
  requires 'File::Which';
  requires 'Filesys::DiskUsage';
  requires 'HTML::Tiny';
  requires 'HTTP::Tiny';
  requires 'Imager';
  requires 'List::Util', '1.56';
  requires 'PDL';
  requires 'PDL::Graphics::Gnuplot';
  requires 'Path::Tiny';
  requires 'Syntax::Construct';
  requires 'Text::Table::Tiny';
  requires 'URI';
  requires 'constant';
  requires 'feature';
  requires 'lib::projectroot';
  requires 'strict';
  requires 'utf8';
  requires 'warnings';

=head1 AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

=head1 COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004

=cut


