AI-TensorFlow-Libtensorflow
CONTRIBUTING
<<<=== COPYRIGHT CONTRIBUTIONS ===>>>
[ BEGIN, APTECH FAMILY COPYRIGHT ASSIGNMENT AGREEMENT ]
By contributing to this repository, you agree that any and all such Contributions and derivative works thereof shall immediately become part of the APTech Family of software and documentation, and you accept and agree to the following legally-binding...
1. Definitions.
"You" or "Your" shall mean the copyright owner, or legal entity authorized by the copyright owner, that is making this Agreement. For legal entities, the entity making a Contribution and all other entities that control, are controlled by, or are und...
"APTech" is defined as the Delaware corporation named Auto-Parallel Technologies, Inc. with a primary place of business in Cedar Park, Texas, USA.
The "APTech Family of software and documentation" (hereinafter the "APTech Family" ) is defined as all copyrightable works identified as "part of the APTech Family" immediately following their copyright notice, and includes but is not limited to this ...
"Team APTech" is defined as all duly-authorized contributors to the APTech Family, including You after making Your first Contribution to the APTech Family under the terms of this Agreement.
"Team APTech Leadership" is defined as all duly-authorized administrators and official representatives of the APTech Family, as listed publicly on the most up-to-date copy of the AutoParallel.com website.
"Contribution" shall mean any original work of authorship, including any changes or additions or enhancements to an existing work, that is intentionally submitted by You to this repository for inclusion in, or documentation of, any of the products or...
2. Assignment of Copyright. Subject to the terms and conditions of this Agreement, and for good and valuable consideration, receipt of which You acknowledge, You hereby transfer to the Delaware corporation named Auto-Parallel Technologies, Inc. with ...
You hereby agree that if You have or acquire hereafter any patent or interface copyright or other intellectual property interest dominating the software or documentation contributed to by the Work (or use of that software or documentation), such domi... You hereby represent and warrant that You are the sole copyright holder for the Work and that You have the right and power to enter into this legally-binding contractual agreement. You hereby indemnify and hold harmless APTech, its heirs, assignees,...
3. Grant of Patent License. Subject to the terms and conditions of this Agreement, You hereby grant to APTech and to recipients of software distributed by APTech a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as ...
4. You represent that you are legally entitled to assign the above copyright and grant the above patent license. If your employer(s) or contractee(s) have rights to intellectual property that you create that includes your Contributions, then you rep...
5. You represent that each of Your Contributions is Your original creation and is not subject to any third-party license or other restriction (including, but not limited to, related patents and trademarks) of which you are personally aware and which ...
6. You agree to submit written notification to Team APTech's Leadership of any facts or circumstances of which you become aware that would make the representations of this Agreement inaccurate in any respect.
[ END, APTECH FAMILY COPYRIGHT ASSIGNMENT AGREEMENT ]
<<<=== LEGAL OVERVIEW ===>>>
All APTech Family software and documentation is legally copyrighted by Auto-Parallel Technologies, Inc.
To maintain the legal integrity and defensibility of the APTech Family of software and documentation, all contributors to the APTech Family must assign copyright ownership to Auto-Parallel Technologies, Inc. under the terms of the APTech Family Copyr...
This is the same strategy used by the Free Software Foundation for many GNU software projects, as explained below:
Why The FSF Gets Copyright Assignments From Contributors
By Professor Eben Moglen, Columbia University Law School
Copyright © 2001, 2008, 2009, 2014 Free Software Foundation, Inc.
The quoted text below is not modified, and is licensed under a Creative Commons Attribution-NoDerivs 3.0 United States License.
"Under US copyright law, which is the law under which most free software programs have historically been first published, there are very substantial procedural advantages to registration of copyright. And despite the broad right of distribution conv...
In order to make sure that all of our copyrights can meet the recordkeeping and other requirements of registration, and in order to be able to enforce the GPL most effectively, FSF requires that each author of code incorporated in FSF projects provid...
<<<=== COMMITMENT TO FREE & OPEN SOURCE SOFTWARE ===>>>
Auto-Parallel Technologies, Inc. is committed to maintaining the free-and-open-source software (FOSS) basis of the APTech Family.
If your APTech Family contribution is accepted and merged into an official APTech Family source repository, then your contribution is automatically published online with FOSS licensing, currently the Apache License Version 2.0.
<<<=== EMPLOYER COPYRIGHT DISCLAIMER AGREEMENT ===>>>
The file named EMPLOYERS.pdf contains the Employer Copyright Disclaimer Agreement. If you are employed or work as an independent contractor, and either your job involves computer programming or you have executed an agreement giving your employer or ...
<<<=== OTHER CONTRIBUTORS ===>>>
CONTRIBUTING
6. E-Mail Address
7. Names of APTech Family Files Modified (or "none")
8. Names of APTech Family Files Created (or "none")
9. Current Employer(s) or Contractee(s) (or "none")
10. Does Your Job Involve Computer Programming? (or "not applicable")
11. Does Your Job Involve an IP Ownership Agreement? (or "not applicable")
12. Name(s) & Employer(s) of Additional Contributors (or "none")
Snail Mail Address:
Auto-Parallel Technologies, Inc.
[ CONTACT VIA E-MAIL BELOW FOR STREET ADDRESS ]
Cedar Park, TX, USA, 78613
E-Mail Address (Remove "NOSPAM." Before Sending):
william.braswell at NOSPAM.autoparallel.com
THANKS FOR CONTRIBUTING! :-)
COPYRIGHT
AI::TensorFlow::Libtensorflow is Copyright © 2022 Auto-Parallel Technologies, Inc.
All rights reserved.
AI::TensorFlow::Libtensorflow is part of the APTech Family of software and documentation.
This program is free software; you can redistribute it and/or modify
it under the terms of the Apache License Version 2.0.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
Apache License Version 2.0 for more details.
LICENSE
This software is Copyright (c) 2022 by Auto-Parallel Technologies, Inc.
This is free software, licensed under:
The Apache License, Version 2.0, January 2004
Apache License
Version 2.0, January 2004
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
LICENSE
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2022 Auto-Parallel Technologies, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
README
This archive contains the distribution AI-TensorFlow-Libtensorflow,
version 0.0.7:
Bindings for Libtensorflow deep learning library
This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.
This is free software, licensed under:
The Apache License, Version 2.0, January 2004
This README file was generated by Dist::Zilla::Plugin::Readme v6.030.
dist.ini
name = AI-TensorFlow-Libtensorflow
version = 0.0.7
author = Zakariyya Mughal <zmughal@cpan.org>
; This is licensed under the same license as TensorFlow which is
license = Apache_2_0
copyright_holder = Auto-Parallel Technologies, Inc.
copyright_year = 2022-2023
[lib]
lib = maint/inc
; in maint/inc/
[=maint::inc::PreloadPodWeaver]
;; For ::Lib
; authordep Alien::Libtensorflow
; authordep FFI::Platypus
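The license, copyright_holder, and copyright_year keys above are what generate the COPYRIGHT and LICENSE text shown earlier: Dist::Zilla resolves license = Apache_2_0 to the Software::License::Apache_2_0 class. A minimal illustrative sketch of that mapping (not part of this distribution's build code):

use Software::License::Apache_2_0;

# Mirror dist.ini's copyright_holder / copyright_year values.
my $license = Software::License::Apache_2_0->new({
    holder => 'Auto-Parallel Technologies, Inc.',
    year   => '2022-2023',
});

print $license->notice;     # short copyright + permission notice
# $license->fulltext would return the complete Apache License 2.0 text.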
lib/AI/TensorFlow/Libtensorflow.pm
lib/AI/TensorFlow/Libtensorflow/ApiDefMap.pm
lib/AI/TensorFlow/Libtensorflow/Buffer.pm
lib/AI/TensorFlow/Libtensorflow/DataType.pm
is "@{[ DOUBLE ]}", 'DOUBLE', 'Stringifies';
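The test line above checks that DataType constants stringify to their names. A short usage sketch, assuming the dtype constants (FLOAT, DOUBLE, ...) are importable on request, as the test implies:

use AI::TensorFlow::Libtensorflow::DataType qw(FLOAT DOUBLE);

# Each constant is a DataType object whose string form is its name.
print FLOAT,  "\n";   # FLOAT
print DOUBLE, "\n";   # DOUBLE

These objects are what the rest of the API expects wherever a TF_DataType is needed, for example when building tensors.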
lib/AI/TensorFlow/Libtensorflow/DeviceList.pm
lib/AI/TensorFlow/Libtensorflow/Eager/Context.pm
B<C API>: L<< C<TFE_DeleteContext>|AI::TensorFlow::Libtensorflow::Manual::CAPI/TFE_DeleteContext >>
1;
lib/AI/TensorFlow/Libtensorflow/Eager/ContextOptions.pm
lib/AI/TensorFlow/Libtensorflow/Graph.pm
lib/AI/TensorFlow/Libtensorflow/ImportGraphDefOptions.pm
lib/AI/TensorFlow/Libtensorflow/ImportGraphDefResults.pm
lib/AI/TensorFlow/Libtensorflow/Input.pm
lib/AI/TensorFlow/Libtensorflow/Lib.pm
lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/TFPtrPtrLenSizeArrayRefScalar.pm
lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/TFPtrSizeScalar.pm
lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/TFPtrSizeScalarRef.pm
lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/Variant/PackableArrayRef.pm
lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/Variant/PackableMaybeArrayRef.pm
lib/AI/TensorFlow/Libtensorflow/Lib/FFIType/Variant/RecordArrayRef.pm
lib/AI/TensorFlow/Libtensorflow/Lib/Types.pm
lib/AI/TensorFlow/Libtensorflow/Lib/_Alloc.pm
lib/AI/TensorFlow/Libtensorflow/Manual.pod
the documentation of individual methods.
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
TF_CAPI_EXPORT extern void TF_DeleteServer(TF_Server* server);
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
TF_CAPI_EXPORT extern void* TF_GetSymbolFromLibrary(void* handle,
                                                    const char* symbol_name,
                                                    TF_Status* status);
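The declarations quoted in CAPI.pod come straight from the TensorFlow C headers; the distribution binds them to Perl with FFI::Platypus against the shared library provided by Alien::Libtensorflow. A minimal illustrative sketch of that binding approach (not the distribution's actual Lib.pm code; TF_Version is chosen because it takes no arguments):

use FFI::Platypus 2.00;
use Alien::Libtensorflow;

# Point FFI::Platypus at the libtensorflow shared library.
my $ffi = FFI::Platypus->new(
    api => 2,
    lib => [ Alien::Libtensorflow->dynamic_libs ],
);

# const char* TF_Version(void);
$ffi->attach( TF_Version => [] => 'string' );

print 'libtensorflow version: ', TF_Version(), "\n";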
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
TF_CAPI_EXPORT extern void TFE_OpAttrsSerialize(const TFE_OpAttrs* attrs,
                                                TF_Buffer* buf,
                                                TF_Status* status);
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT void TFE_ContextSetSoftDevicePlacement(TFE_Context* ctx,
                                                      unsigned char enable,
                                                      TF_Status* status);
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TFE_GetExecutedOpNames(TFE_Context* ctx,
                                                  TF_Buffer* buf,
                                                  TF_Status* status);
lib/AI/TensorFlow/Libtensorflow/Manual/GPU.pod
    $ENV{CUDA_VISIBLE_DEVICES} = '0';
}
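The GPU.pod fragment above sets CUDA_VISIBLE_DEVICES inside a BEGIN-style block so the setting is in place before libtensorflow initializes. A hedged sketch of the same idea, here hiding all GPUs to force CPU execution (the environment variables are standard TensorFlow ones; the surrounding code in GPU.pod may differ):

BEGIN {
    # '-1' (or an empty string) hides all GPUs; '0' would expose only the first GPU.
    $ENV{CUDA_VISIBLE_DEVICES} = '-1';
    # Optional: reduce libtensorflow's C++ log noise.
    $ENV{TF_CPP_MIN_LOG_LEVEL} = '1';
}
use AI::TensorFlow::Libtensorflow;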
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod
>sgx;
my $label_count = List::Util::max keys %labels_map;
say "We have a label count of $label_count. These labels include: ",
    join ", ", List::Util::head( 5, @labels_map{ sort keys %labels_map } );
my @tags = ( 'serve' );
if( File::Which::which('saved_model_cli') ) {
    local $ENV{TF_CPP_MIN_LOG_LEVEL} = 3;
    system( qw(saved_model_cli show),
        qw(--dir)           => $model_base,
        qw(--tag_set)       => join(',', @tags),
        qw(--signature_def) => 'serving_default'
    ) == 0 or die "Could not run saved_model_cli";
} else {
    say "Install the tensorflow Python package to get the `saved_model_cli` command.";
}
my $opt = AI::TensorFlow::Libtensorflow::SessionOptions->New;
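After the SessionOptions object is created, the notebook goes on to load the SavedModel itself. A hedged sketch of that step, assuming Session->LoadFromSavedModel mirrors the C API's TF_LoadSessionFromSavedModel argument order (session options, run options, export dir, tags, graph, meta-graph buffer, status):

my $graph  = AI::TensorFlow::Libtensorflow::Graph->New;
my $status = AI::TensorFlow::Libtensorflow::Status->New;

my $session = AI::TensorFlow::Libtensorflow::Session->LoadFromSavedModel(
    $opt, undef, $model_base, \@tags, $graph, undef, $status
);
# A status check along these lines typically follows (method names assumed
# from the C API's TF_GetCode / TF_Message):
# die 'Could not load SavedModel: ' . $status->Message if $status->GetCode;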
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubCenterNetObjDetect.pod
requires 'strict';
requires 'utf8';
requires 'warnings';
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod
$t->mark('END');
$t->report();
my $predictions_p = FloatTFTensorToPDL( $predictions )->slice(',,(0)');
say $predictions_p->info; undef;
my @tracks = (
    [ 'DNASE:CD14-positive monocyte female' =>   41 => $predictions_p->slice('(41)')   ],
    [ 'DNASE:keratinocyte female'           =>   42 => $predictions_p->slice('(42)')   ],
    [ 'CHIP:H3K27ac:keratinocyte female'    =>  706 => $predictions_p->slice('(706)')  ],
    [ 'CAGE:Keratinocyte - epidermal'       => 4799 => log10(1 + $predictions_p->slice('(4799)')) ],
);
my $plot_output_path = 'enformer-target-interval-tracks.png';
my $gp = gpwin( 'pngcairo', font => ",10", output => $plot_output_path, size => [10, 2. * @tracks], aa => 2 );
$gp->multiplot( layout => [1, scalar @tracks], title => $target_interval );
$gp->options(
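One detail worth noting in the track list above: the CAGE track is passed through log10(1 + x), a common transform for count data with a wide dynamic range, while the DNASE/CHIP tracks are plotted as-is. With PDL this is an ordinary elementwise operation, for example:

use PDL;

my $counts = pdl( 0, 9, 99, 999, 9999 );
my $scaled = log10( 1 + $counts );    # => [0 1 2 3 4]
print $scaled, "\n";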
lib/AI/TensorFlow/Libtensorflow/Manual/Notebook/InferenceUsingTFHubEnformerGeneExprPredModel.pod
requires 'strict';
requires 'utf8';
requires 'warnings';