AI-TensorFlow-Libtensorflow
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod view on Meta::CPAN
=over 2
Creates a temporary file name with an extension.
The caller is responsible for freeing the returned pointer.
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern char* TF_GetTempFileName(const char* extension);
=head2 TF_NowNanos
=over 2
Returns the number of nanoseconds since the Unix epoch.
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern uint64_t TF_NowNanos(void);
=head2 TF_NowMicros
=over 2
Returns the number of microseconds since the Unix epoch.
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern uint64_t TF_NowMicros(void);
=head2 TF_NowSeconds
=over 2
Returns the number of seconds since the Unix epoch.
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern uint64_t TF_NowSeconds(void);
=head2 TF_DefaultThreadOptions
=over 2
Populates a TF_ThreadOptions struct with system-default values.
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern void TF_DefaultThreadOptions(TF_ThreadOptions* options);
=head2 TF_StartThread
=over 2
Returns a new thread that is running work_func and is identified
(for debugging/performance-analysis) by thread_name.
The given param (which may be null) is passed to work_func when the thread
starts. In this way, data may be passed from the thread back to the caller.
Caller takes ownership of the result and must call TF_JoinThread on it
eventually.
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern TF_Thread* TF_StartThread(const TF_ThreadOptions* options,
const char* thread_name,
void (*work_func)(void*),
void* param);
=head2 TF_JoinThread
=over 2
Waits for the given thread to finish execution, then deletes it.
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern void TF_JoinThread(TF_Thread* thread);
=head2 TF_LoadSharedLibrary
=over 2
Loads a dynamic library.
Passes "library_filename" to a platform-specific mechanism for dynamically
loading a library. The rules for determining the exact location of the
library are platform-specific and are not documented here.
On success, places OK in status and returns the newly created library handle.
Otherwise, returns nullptr and sets the error status.
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern void* TF_LoadSharedLibrary(const char* library_filename,
TF_Status* status);
=head2 TF_GetSymbolFromLibrary
=over 2
Gets a pointer to a symbol from a dynamic library.
"handle" should be a pointer returned from a previous call to
TF_LoadSharedLibrary. On success, places OK in status and returns a pointer to
the located symbol. Otherwise, returns nullptr and sets the error status.
=back
/* From <tensorflow/c/env.h> */
TF_CAPI_EXPORT extern void* TF_GetSymbolFromLibrary(void* handle,
                                                    const char* symbol_name,
                                                    TF_Status* status);
=head2 TFE_TensorHandleResolve
=over 2
This function will block until the operation that produces `h` has
completed. The memory returned might alias the internal memory used by
TensorFlow. Hence, callers should not mutate this memory (for example by
modifying the memory region pointed to by TF_TensorData() on the returned
TF_Tensor).
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern TF_Tensor* TFE_TensorHandleResolve(TFE_TensorHandle* h,
TF_Status* status);
=head2 TFE_TensorHandleCopyToDevice
=over 2
Create a new TFE_TensorHandle with the same contents as 'h' but placed
in the memory of the device name 'device_name'.
If source and destination are the same device, then this creates a new handle
that shares the underlying buffer. Otherwise, it currently requires at least
one of the source or destination devices to be CPU (i.e., for the source or
destination tensor to be placed in host memory).
If async execution is enabled, the copy may be enqueued and the call will
return a "non-ready" handle. Otherwise, this function returns after the copy
has been done.
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern TFE_TensorHandle* TFE_TensorHandleCopyToDevice(
TFE_TensorHandle* h, TFE_Context* ctx, const char* device_name,
TF_Status* status);
=head2 TFE_TensorHandleTensorDebugInfo
=over 2
Retrieves TFE_TensorDebugInfo for `handle`.
If TFE_TensorHandleTensorDebugInfo succeeds, `status` is set to OK and caller
is responsible for deleting returned TFE_TensorDebugInfo.
If TFE_TensorHandleTensorDebugInfo fails, `status` is set to an appropriate
error and nullptr is returned. This function can block until the operation
that produces `handle` has completed.
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern TFE_TensorDebugInfo* TFE_TensorHandleTensorDebugInfo(
TFE_TensorHandle* h, TF_Status* status);
=head2 TFE_DeleteTensorDebugInfo
=over 2
Deletes `debug_info`.
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern void TFE_DeleteTensorDebugInfo(
TFE_TensorDebugInfo* debug_info);
=head2 TFE_TensorDebugInfoOnDeviceNumDims
=over 2
Returns the number of dimensions used to represent the tensor on its device.
The number of dimensions used to represent the tensor on device can be
different from the number returned by TFE_TensorHandleNumDims.
The return value was current at the time of TFE_TensorDebugInfo creation.
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern int TFE_TensorDebugInfoOnDeviceNumDims(
TFE_TensorDebugInfo* debug_info);
=head2 TFE_TensorDebugInfoOnDeviceDim
=over 2
Returns the number of elements in dimension `dim_index`.
Tensor representation on device can be transposed from its representation
on host. The data contained in dimension `dim_index` on device
can correspond to the data contained in another dimension in the on-host
representation. The dimensions are indexed using the standard TensorFlow
major-to-minor order (slowest varying dimension first),
not XLA's minor-to-major order.
On-device dimensions can be padded. TFE_TensorDebugInfoOnDeviceDim returns
the number of elements in a dimension after padding.
The return value was current at the time of TFE_TensorDebugInfo creation.
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern int64_t TFE_TensorDebugInfoOnDeviceDim(
TFE_TensorDebugInfo* debug_info, int dim_index);
=head2 TFE_NewOp
=over 2
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern TFE_Op* TFE_NewOp(TFE_Context* ctx,
const char* op_or_function_name,
TF_Status* status);
=head2 TFE_DeleteOp
=over 2
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern void TFE_DeleteOp(TFE_Op* op);
=head2 TFE_OpGetName
=over 2
Returns the op or function name `op` will execute.
The returned string remains valid throughout the lifetime of 'op'.
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern const char* TFE_OpGetName(const TFE_Op* op,
TF_Status* status);
=head2 TFE_OpGetContext
=over 2
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern TFE_Context* TFE_OpGetContext(const TFE_Op* op,
TF_Status* status);
=head2 TFE_OpSetDevice
=over 2
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern void TFE_OpSetDevice(TFE_Op* op, const char* device_name,
TF_Status* status);
=head2 TFE_OpGetDevice
=over 2
The returned string remains valid throughout the lifetime of 'op'.
=back
/* From <tensorflow/c/eager/c_api.h> */
TF_CAPI_EXPORT extern const char* TFE_OpGetDevice(const TFE_Op* op,
                                                  TF_Status* status);
=head2 TFE_ContextSetRunEagerOpAsFunction
=over 2
Configures whether eager ops are run as functions.
=back
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT void TFE_ContextSetRunEagerOpAsFunction(TFE_Context* ctx,
unsigned char enable,
TF_Status* status);
=head2 TFE_ContextSetJitCompileRewrite
=over 2
Enables the rewrite of jit_compile functions.
=back
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT void TFE_ContextSetJitCompileRewrite(TFE_Context* ctx,
unsigned char enable,
TF_Status* status);
=head2 TFE_TensorHandleDeviceType
=over 2
Returns the device type of the operation that produced `h`.
=back
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern const char* TFE_TensorHandleDeviceType(
TFE_TensorHandle* h, TF_Status* status);
=head2 TFE_TensorHandleDeviceID
=over 2
Returns the device ID of the operation that produced `h`.
=back
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern int TFE_TensorHandleDeviceID(TFE_TensorHandle* h,
TF_Status* status);
=head2 TFE_TensorHandleGetStatus
=over 2
Returns the status for the tensor handle. In TFRT, a tensor handle can carry
error info if an error happens; if so, the status will be set with that error
info. Otherwise, the status will be set to OK.
=back
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TFE_TensorHandleGetStatus(TFE_TensorHandle* h,
TF_Status* status);
=head2 TFE_GetExecutedOpNames
=over 2
Get a comma-separated list of op names executed in graph functions dispatched
to `ctx`. This feature is currently only enabled for TFRT debug builds, for
performance and simplicity reasons.
=back
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TFE_GetExecutedOpNames(TFE_Context* ctx,
TF_Buffer* buf,
TF_Status* status);
=head2 TFE_SetLogicalCpuDevices
=over 2
Set logical devices to the context's device manager.
If logical devices are already configured at context initialization
through TFE_ContextOptions, this method should not be called.
=back
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TFE_SetLogicalCpuDevices(TFE_Context* ctx,
int num_cpus,
const char* prefix,
TF_Status* status);
=head2 TFE_InsertConfigKeyValue
=over 2
Set configuration key and value using coordination service.
If coordination service is enabled, the key-value will be stored on the
leader and become accessible to all workers in the cluster.
Currently, a config key can only be set with one value, and subsequently
setting the same key will lead to errors.
Note that the key-values are only expected to be used for cluster
configuration data, and should not be used for storing a large amount of data
or being accessed very frequently.
=back
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TFE_InsertConfigKeyValue(TFE_Context* ctx,
const char* key,
const char* value,
TF_Status* status);
=head2 TFE_GetConfigKeyValue
=over 2
Get configuration key and value using coordination service.
The config key must be set before getting its value. Getting the value of a
non-existent config key will result in an error.
=back
/* From <tensorflow/c/eager/c_api_experimental.h> */
TF_CAPI_EXPORT extern void TFE_GetConfigKeyValue(TFE_Context* ctx,
const char* key,