AI-TensorFlow-Libtensorflow
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod
b) ConfigProto.gpu_options.allow_growth is set to `gpu_memory_allow_growth`.
c) ConfigProto.device_count is set to `num_cpu_devices`.
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern TF_Buffer* TF_CreateConfig(
unsigned char enable_xla_compilation, unsigned char gpu_memory_allow_growth,
unsigned int num_cpu_devices);
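As a usage sketch (not part of the header), the serialized config can be attached to session options via TF_SetConfig. The flag values below are illustrative; this assumes libtensorflow is installed and linked, so it is shown here as a non-runnable example:

```c
#include <tensorflow/c/c_api.h>
#include <tensorflow/c/c_api_experimental.h>

/* Build session options with XLA off, GPU memory growth on, and one
 * CPU device. Error handling is elided to a comment. */
TF_SessionOptions* make_options(void) {
  TF_Buffer* config = TF_CreateConfig(
      /* enable_xla_compilation  */ 0,
      /* gpu_memory_allow_growth */ 1,
      /* num_cpu_devices         */ 1);
  TF_SessionOptions* opts = TF_NewSessionOptions();
  TF_Status* status = TF_NewStatus();
  TF_SetConfig(opts, config->data, config->length, status);
  /* Caller should verify TF_GetCode(status) == TF_OK here. */
  TF_DeleteStatus(status);
  TF_DeleteBuffer(config);
  return opts;
}
```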
=head2 TF_CreateRunOptions
=over 2
Create a serialized tensorflow.RunOptions proto, where RunOptions.trace_level
is set to FULL_TRACE if `enable_full_trace` is non-zero, and NO_TRACE
otherwise.
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern TF_Buffer* TF_CreateRunOptions(
unsigned char enable_full_trace);
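A minimal sketch of how the returned buffer might be used, assuming a session set up elsewhere (libtensorflow must be linked, so this is illustrative only):

```c
#include <tensorflow/c/c_api.h>
#include <tensorflow/c/c_api_experimental.h>

/* Serialized RunOptions with full tracing enabled, suitable for the
 * run_options argument of TF_SessionRun. */
TF_Buffer* run_opts = TF_CreateRunOptions(/* enable_full_trace */ 1);
/* ... pass run_opts to TF_SessionRun ... */
TF_DeleteBuffer(run_opts);
```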
=head2 TF_GraphDebugString
=over 2
Returns the graph content in a human-readable format, with length set in
`len`. The format is subject to change in the future.
The returned string is heap-allocated, and the caller should call free() on it.
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern const char* TF_GraphDebugString(TF_Graph* graph,
size_t* len);
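The ownership contract above can be sketched as follows (a hypothetical helper, shown for illustration; requires linking against libtensorflow):

```c
#include <stdio.h>
#include <stdlib.h>
#include <tensorflow/c/c_api.h>
#include <tensorflow/c/c_api_experimental.h>

/* Print a graph's debug string; the string is heap-allocated, so the
 * caller frees it. The string may not be NUL-terminated, hence fwrite
 * with the explicit length. */
void dump_graph(TF_Graph* graph) {
  size_t len = 0;
  const char* s = TF_GraphDebugString(graph, &len);
  fwrite(s, 1, len, stdout);
  free((void*)s);
}
```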
=head2 TF_FunctionDebugString
=over 2
Returns the function content in a human-readable format, with length set in
`len`. The format is subject to change in the future.
The returned string is heap-allocated, and the caller should call free() on it.
This function returns char* rather than const char*, because some foreign-language
bindings (e.g. Swift) cannot then call free() on a const pointer.
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern char* TF_FunctionDebugString(TF_Function* func,
size_t* len);
=head2 TF_DequeueNamedTensor
=over 2
The caller must call TF_DeleteTensor() on the returned tensor. If the queue is
empty, this call blocks.
Tensors are enqueued via the corresponding TF enqueue op.
TODO(hongm): Add support for `timeout_ms`.
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern TF_Tensor* TF_DequeueNamedTensor(TF_Session* session,
int tensor_id,
TF_Status* status);
=head2 TF_EnqueueNamedTensor
=over 2
On success, enqueues `tensor` into a TF-managed FifoQueue given by
`tensor_id`, associated with `session`. There must be a graph node named
"fifo_queue_enqueue_<tensor_id>", to be executed by this API call. It reads
from a placeholder node "arg_tensor_enqueue_<tensor_id>".
`tensor` is still owned by the caller. This call blocks if the queue has
reached its capacity, and unblocks once the number of queued tensors drops
below capacity due to dequeuing.
Tensors are dequeued via the corresponding TF dequeue op.
TODO(hongm): Add support for `timeout_ms`.
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TF_EnqueueNamedTensor(TF_Session* session,
int tensor_id,
TF_Tensor* tensor,
TF_Status* status);
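The enqueue/dequeue pair above can be sketched as a round trip through a queue with id 0. This assumes the graph already contains the required "fifo_queue_enqueue_0" node and its dequeue counterpart, and that `session` and `tensor` were set up elsewhere (libtensorflow must be linked, so this is illustrative only):

```c
#include <tensorflow/c/c_api.h>
#include <tensorflow/c/c_api_experimental.h>

/* Enqueue a tensor into queue 0, then dequeue it again. Note that
 * `tensor` remains owned by the caller, while the dequeued tensor
 * must be deleted by the caller. */
void roundtrip(TF_Session* session, TF_Tensor* tensor) {
  TF_Status* status = TF_NewStatus();
  TF_EnqueueNamedTensor(session, /* tensor_id */ 0, tensor, status);
  /* Blocking dequeue on the same queue: */
  TF_Tensor* out = TF_DequeueNamedTensor(session, 0, status);
  TF_DeleteTensor(out);
  TF_DeleteStatus(status);
}
```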
=head2 TF_MakeInternalErrorStatus
=over 2
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TF_MakeInternalErrorStatus(TF_Status* status,
const char* errMsg);
=head2 TF_NewCheckpointReader
=over 2
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern TF_CheckpointReader* TF_NewCheckpointReader(
const char* filename, TF_Status* status);
=head2 TF_DeleteCheckpointReader
=over 2
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TF_DeleteCheckpointReader(
TF_CheckpointReader* reader);
=head2 TF_CheckpointReaderHasTensor
=over 2
=back
/* From <tensorflow/c/c_api_experimental.h> */
TF_CAPI_EXPORT extern int TF_CheckpointReaderHasTensor(
TF_CheckpointReader* reader, const char* name);
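Taken together, the checkpoint-reader functions above support a simple open/query/close pattern. The path and variable name below are purely illustrative (and libtensorflow must be linked, so this is a sketch only):

```c
#include <tensorflow/c/c_api.h>
#include <tensorflow/c/c_api_experimental.h>

/* Open a checkpoint, test whether it contains a named tensor, clean up.
 * Returns non-zero if the tensor is present. */
int checkpoint_has(const char* path, const char* name) {
  TF_Status* status = TF_NewStatus();
  TF_CheckpointReader* reader = TF_NewCheckpointReader(path, status);
  int has = 0;
  if (TF_GetCode(status) == TF_OK) {
    has = TF_CheckpointReaderHasTensor(reader, name);
    TF_DeleteCheckpointReader(reader);
  }
  TF_DeleteStatus(status);
  return has;
}
```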
=head2 TF_CheckpointReaderGetVariable
=over 2
Get the variable name at the given index.
=back