AI-TensorFlow-Libtensorflow
lib/AI/TensorFlow/Libtensorflow/Manual/CAPI.pod view on Meta::CPAN
  The effect is to bound the memory held by in-flight TensorHandles that are
  referenced by the in-flight nodes.
  A recommended value has not been established.
  A value of 0 removes the limit, which is the behavior of TensorFlow 2.11.
  When is_async is false, the value is ignored.
=back
  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern TFE_Executor* TFE_NewExecutor(
      bool is_async, bool enable_streaming_enqueue, int in_flight_nodes_limit);
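A minimal sketch (untested, assuming a libtensorflow build that exposes the experimental eager API) of creating an async executor, attaching it to the current thread, and tearing it down. `TFE_ContextSetExecutorForThread` and `TFE_ExecutorWaitForAllPendingNodes` are declared in the same `c_api_experimental.h` header:

```c
#include <stdbool.h>
#include <tensorflow/c/c_api.h>
#include <tensorflow/c/eager/c_api.h>
#include <tensorflow/c/eager/c_api_experimental.h>

int main(void) {
  TF_Status* status = TF_NewStatus();
  TFE_ContextOptions* opts = TFE_NewContextOptions();
  TFE_Context* ctx = TFE_NewContext(opts, status);
  TFE_DeleteContextOptions(opts);

  /* Async executor; 0 removes the in-flight nodes limit (TF 2.11 behavior). */
  TFE_Executor* executor = TFE_NewExecutor(
      /* is_async                 */ true,
      /* enable_streaming_enqueue */ true,
      /* in_flight_nodes_limit    */ 0);
  TFE_ContextSetExecutorForThread(ctx, executor);

  /* ... enqueue eager ops on this thread ... */

  /* Drain pending nodes before deleting the executor. */
  TFE_ExecutorWaitForAllPendingNodes(executor, status);
  TFE_DeleteExecutor(executor);
  TFE_DeleteContext(ctx);
  TF_DeleteStatus(status);
  return 0;
}
```

Waiting on pending nodes before `TFE_DeleteExecutor` follows the guidance in the `TFE_DeleteExecutor` documentation below, since deletion itself does not wait for enqueued nodes.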
=head2 TFE_DeleteExecutor
=over 2
  Deletes the eager Executor without waiting for enqueued nodes. Please call
  TFE_ExecutorWaitForAllPendingNodes before calling this API if you want to
  make sure all nodes are finished.
=back
  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern bool TFE_ContextCheckAlive(TFE_Context* ctx,
                                                   const char* worker_name,
                                                   TF_Status* status);
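A sketch of how the declaration above might be used from an existing eager context. The task-name string shown (`"/job:worker/replica:0/task:0"`) is an illustrative assumption, not taken from this document:

```c
#include <stdbool.h>
#include <stdio.h>
#include <tensorflow/c/c_api.h>
#include <tensorflow/c/eager/c_api.h>
#include <tensorflow/c/eager/c_api_experimental.h>

/* Report whether a remote worker task is still reachable from ctx.
 * worker_name format here is an assumption for illustration. */
static void report_worker(TFE_Context* ctx, const char* worker_name) {
  TF_Status* status = TF_NewStatus();
  bool alive = TFE_ContextCheckAlive(ctx, worker_name, status);
  if (TF_GetCode(status) == TF_OK) {
    printf("%s is %s\n", worker_name, alive ? "alive" : "not alive");
  } else {
    fprintf(stderr, "check failed: %s\n", TF_Message(status));
  }
  TF_DeleteStatus(status);
}
```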
=head2 TFE_ContextAsyncWait
=over 2
  Sync pending nodes in local executors (including the context default executor
  and thread executors) and streaming requests to remote executors, and get the
  combined status.
=back
  /* From <tensorflow/c/eager/c_api_experimental.h> */
  TF_CAPI_EXPORT extern void TFE_ContextAsyncWait(TFE_Context* ctx,
                                                  TF_Status* status);
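The wait-and-check pattern described above can be sketched as a small helper (untested; assumes an already-initialized `TFE_Context`):

```c
#include <stdio.h>
#include <tensorflow/c/c_api.h>
#include <tensorflow/c/eager/c_api.h>
#include <tensorflow/c/eager/c_api_experimental.h>

/* Block until pending nodes in local executors and streaming requests
 * to remote executors complete, then surface the combined status. */
static int sync_context(TFE_Context* ctx) {
  TF_Status* status = TF_NewStatus();
  TFE_ContextAsyncWait(ctx, status);
  int ok = (TF_GetCode(status) == TF_OK);
  if (!ok) {
    fprintf(stderr, "async wait failed: %s\n", TF_Message(status));
  }
  TF_DeleteStatus(status);
  return ok;
}
```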
=head2 TFE_TensorHandleDevicePointer