AI-TensorFlow-Libtensorflow
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
(inputs[0,ninputs-1] with corresponding values in input_values[0,ninputs-1]).
Any NULL and non-NULL value combinations for (`run_options`,
`run_metadata`) are valid.
- `run_options` may be NULL, in which case it will be ignored; or
non-NULL, in which case it must point to a `TF_Buffer` containing the
serialized representation of a `RunOptions` protocol buffer.
- `run_metadata` may be NULL, in which case it will be ignored; or
non-NULL, in which case it must point to an empty, freshly allocated
`TF_Buffer` that may be updated to contain the serialized representation
of a `RunMetadata` protocol buffer.
The caller retains ownership of `input_values` (which can be deleted using
TF_DeleteTensor). The caller also retains ownership of `run_options` and/or
`run_metadata` (when not NULL) and should manually call TF_DeleteBuffer on
them.
On success, the tensors corresponding to outputs[0,noutputs-1] are placed in
output_values[]. Ownership of the elements of output_values[] is transferred
to the caller, which must eventually call TF_DeleteTensor on them.
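A minimal C sketch of this calling convention, assuming a live C<TF_Session> and already-resolved C<TF_Output> handles; the helper name C<run_once> is illustrative only:

  #include <stdio.h>
  #include <stdlib.h>
  #include <tensorflow/c/c_api.h>

  void run_once(TF_Session* session,
                const TF_Output* inputs, TF_Tensor* const* input_values, int ninputs,
                const TF_Output* outputs, int noutputs) {
    TF_Status* status = TF_NewStatus();
    TF_Buffer* run_metadata = TF_NewBuffer();   /* empty, freshly allocated */
    TF_Tensor** output_values = calloc(noutputs, sizeof(TF_Tensor*));

    TF_SessionRun(session,
                  NULL,                         /* run_options: NULL is ignored */
                  inputs, input_values, ninputs,
                  outputs, output_values, noutputs,
                  NULL, 0,                      /* no target operations */
                  run_metadata, status);

    if (TF_GetCode(status) == TF_OK) {
      /* Ownership of output_values[i] is transferred to the caller. */
      for (int i = 0; i < noutputs; ++i) TF_DeleteTensor(output_values[i]);
    } else {
      fprintf(stderr, "TF_SessionRun: %s\n", TF_Message(status));
    }

    /* The caller still owns run_metadata and the input tensors. */
    TF_DeleteBuffer(run_metadata);
    free(output_values);
    TF_DeleteStatus(status);
  }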
and deleting entries as they are encountered.
If dirname itself is not readable or does not exist, *undeleted_dir_count is
set to 1, *undeleted_file_count is set to 0 and an appropriate status (e.g.
TF_NOT_FOUND) is returned.
If dirname and all its descendants were successfully deleted, TF_OK is
returned and both error counters are set to zero.
Otherwise, while traversing the tree, undeleted_file_count and
undeleted_dir_count are updated if an entry of the corresponding type could
not be deleted. The returned error status represents the reason that any one
of these entries could not be deleted.
Typical status codes:
* TF_OK - dirname exists and we were able to delete everything underneath
* TF_NOT_FOUND - dirname doesn't exist
* TF_PERMISSION_DENIED - dirname or some descendant is not writable
* TF_UNIMPLEMENTED - some underlying functions (like Delete) are not
implemented
TF_CAPI_EXPORT extern void TF_DeleteRecursively(const char* dirname,
int64_t* undeleted_file_count,
int64_t* undeleted_dir_count,
TF_Status* status);
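A hedged sketch of checking the status and the undeleted-entry counters after a recursive delete; the helper name C<remove_tree> is made up for illustration:

  #include <inttypes.h>
  #include <stdio.h>
  #include <tensorflow/c/c_api.h>
  #include <tensorflow/c/env.h>

  int remove_tree(const char* dirname) {
    TF_Status* status = TF_NewStatus();
    int64_t undeleted_files = 0;
    int64_t undeleted_dirs = 0;

    TF_DeleteRecursively(dirname, &undeleted_files, &undeleted_dirs, status);

    if (TF_GetCode(status) != TF_OK) {
      /* e.g. TF_NOT_FOUND when dirname does not exist (0 files, 1 dir undeleted) */
      fprintf(stderr, "delete failed: %s (%" PRId64 " files, %" PRId64 " dirs left)\n",
              TF_Message(status), undeleted_files, undeleted_dirs);
    }
    int ok = (TF_GetCode(status) == TF_OK);
    TF_DeleteStatus(status);
    return ok;
  }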
=head2 TF_FileStat
=over 2
Obtains statistics for the given path. If status is TF_OK, *stats is
updated, otherwise it is not touched.
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern void TF_FileStat(const char* filename,
TF_FileStatistics* stats,
TF_Status* status);
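For illustration, a small sketch that stats a path and reads the C<TF_FileStatistics> fields; the field names (C<length>, C<is_directory>) come from C<env.h> and are not shown in the excerpt above, and the helper name is hypothetical:

  #include <stdio.h>
  #include <tensorflow/c/c_api.h>
  #include <tensorflow/c/env.h>

  void print_stats(const char* path) {
    TF_Status* status = TF_NewStatus();
    TF_FileStatistics stats;

    TF_FileStat(path, &stats, status);
    if (TF_GetCode(status) == TF_OK) {
      /* stats is only updated when status is TF_OK */
      printf("%s: %lld bytes, %s\n", path,
             (long long)stats.length,
             stats.is_directory ? "directory" : "file");
    } else {
      fprintf(stderr, "stat failed: %s\n", TF_Message(status));
    }
    TF_DeleteStatus(status);
  }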
=head2 TF_NewWritableFile
=over 2
Creates or truncates the given filename and returns a handle to be used for
appending data to the file. If status is TF_OK, *handle is updated and the
caller is responsible for freeing it (see TF_CloseWritableFile).
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern void TF_NewWritableFile(const char* filename,
TF_WritableFileHandle** handle,
TF_Status* status);
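A sketch of the create/append/close lifecycle. It assumes the companion calls C<TF_AppendWritableFile> and C<TF_CloseWritableFile> from the same header; the helper name is illustrative:

  #include <string.h>
  #include <tensorflow/c/c_api.h>
  #include <tensorflow/c/env.h>

  void write_text(const char* filename, const char* text) {
    TF_Status* status = TF_NewStatus();
    TF_WritableFileHandle* handle = NULL;

    TF_NewWritableFile(filename, &handle, status);  /* creates or truncates */
    if (TF_GetCode(status) == TF_OK) {
      TF_AppendWritableFile(handle, text, strlen(text), status);
      /* Closing also releases the handle the caller was responsible for. */
      TF_CloseWritableFile(handle, status);
    }
    TF_DeleteStatus(status);
  }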
=head2 TF_CloseWritableFile
TF_OpKernelContext* ctx, bool do_lock, bool sparse, const int* const inputs,
size_t len,
void (*copyFunc)(TF_OpKernelContext* ctx, TF_Tensor* source,
TF_Tensor* dest),
TF_VariableInputLockHolder** lockHolder, TF_Status* status);
=head2 TF_GetInputTensorFromVariable
=over 2
This interface returns the `out` tensor, updated to correspond to the variable
passed at the given input index. The caller takes ownership of the `source`
and `dest` tensors and is responsible for freeing them with TF_DeleteTensor.
=back
/* From <tensorflow/c/kernels_experimental.h> */
TF_CAPI_EXPORT extern void TF_GetInputTensorFromVariable(
TF_OpKernelContext* ctx, int input, bool lock_held, bool isVariantType,
bool sparse,
void (*copyFunc)(TF_OpKernelContext* ctx, TF_Tensor* source,
TF_CAPI_EXPORT extern TFE_Executor* TFE_ContextGetExecutorForThread(
TFE_Context*);
=head2 TFE_ContextUpdateServerDef
=over 2
Update an existing context with a new set of servers defined in a ServerDef
proto. Servers can be added to and removed from the list of remote workers
in the context. The new set of servers identified by the ServerDef must be up
when the context is updated.
This API is for experimental usage and may be subject to change.
=back
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TFE_ContextUpdateServerDef(TFE_Context* ctx,
int keep_alive_secs,
const void* proto,
size_t proto_len,
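An illustrative sketch of the call, assuming the trailing parameter cut off in the excerpt above is a C<TF_Status*> and that a serialized C<ServerDef> protocol buffer is already available (producing those bytes is out of scope here):

  #include <stdio.h>
  #include <tensorflow/c/c_api.h>
  #include <tensorflow/c/eager/c_api.h>
  #include <tensorflow/c/eager/c_api_experimental.h>

  void update_cluster(TFE_Context* ctx, const void* proto, size_t proto_len) {
    TF_Status* status = TF_NewStatus();
    int keep_alive_secs = 600;  /* arbitrary example value */

    TFE_ContextUpdateServerDef(ctx, keep_alive_secs, proto, proto_len, status);
    if (TF_GetCode(status) != TF_OK) {
      /* The new servers must already be up when this call is made. */
      fprintf(stderr, "ServerDef update failed: %s\n", TF_Message(status));
    }
    TF_DeleteStatus(status);
  }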
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TF_AttrBuilderCheckCanRunOnDevice(
TF_AttrBuilder* builder, const char* device_type, TF_Status* status);
=head2 TF_GetNumberAttrForOpListInput
=over 2
For argument number input_index, fetch the corresponding number_attr that
needs to be updated with the argument length of the input list.
Returns nullptr if there is any problem, such as op_name not being found or
the argument not supporting this attribute type.
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern const char* TF_GetNumberAttrForOpListInput(
const char* op_name, int input_index, TF_Status* status);
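A small illustrative sketch of querying the number_attr for a list input and handling the nullptr case; the helper name is hypothetical:

  #include <stdio.h>
  #include <tensorflow/c/c_api.h>
  #include <tensorflow/c/c_api_experimental.h>

  void show_number_attr(const char* op_name, int input_index) {
    TF_Status* status = TF_NewStatus();
    const char* attr_name =
        TF_GetNumberAttrForOpListInput(op_name, input_index, status);

    if (attr_name != NULL) {
      printf("%s input #%d uses number_attr \"%s\"\n",
             op_name, input_index, attr_name);
    } else {
      /* NULL when op_name is unknown or the argument has no number_attr. */
      fprintf(stderr, "no number_attr: %s\n", TF_Message(status));
    }
    TF_DeleteStatus(status);
  }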
=head2 TF_OpIsStateful
lib/AI/TensorFlow/Libtensorflow/Session.pm
=item ArrayRef[TFTensor] $output_values
Reference to where the output values for C<$outputs> will be placed.
=item ArrayRef[TFOperation] $target_opers
TODO
=item Maybe[TFBuffer] $run_metadata
Optional empty C<TFBuffer> which will be updated to contain a serialized
representation of a C<RunMetadata> protocol buffer.
=item L<TFStatus|AI::TensorFlow::Libtensorflow::Lib::Types/TFStatus> $status
Status
=back
B<C API>: L<< C<TF_SessionRun>|AI::TensorFlow::Libtensorflow::Manual::CAPI/TF_SessionRun >>