Async-Redis


examples/async-job-queue/PLAN.md  view on Meta::CPAN

- queue depth check with `LLEN`

For this step, enqueue jobs and then clean them up before exit so repeated
manual runs do not leave queue entries behind.
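The enqueue-then-cleanup flow could look like the following sketch. It assumes an `Async::Redis`-style client whose command methods (`rpush`, `llen`, `del`) return futures, and the helper name `enqueue_and_cleanup` and the `job-N` naming are illustrative, not taken from the example:

```perl
use v5.40;
use Future::AsyncAwait;

# Hypothetical producer helper: enqueue $n jobs, report depth, clean up
# before exit so repeated manual runs start from an empty queue.
async sub enqueue_and_cleanup ($redis, $queue_key, $n) {
    for my $i (1 .. $n) {
        await $redis->rpush($queue_key, "job-$i");
    }
    say "queued $n jobs";

    my $depth = await $redis->llen($queue_key);
    say "queue depth: $depth";

    # Delete the whole list rather than popping entries one by one.
    await $redis->del($queue_key);
}
```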

Verification:

```bash
perlbrew use perl-5.40.0@default
perl -c examples/async-job-queue/app.pl
REDIS_HOST=localhost perl examples/async-job-queue/app.pl --jobs 3 --workers 1 --delay 0.1
```

Expected:

- output includes `queued 3 jobs`
- queue depth reaches `3` before cleanup
- process exits `0`

Review after step:

- job naming cannot collide with the stop sentinel
- repeat runs start cleanly
- no worker logic has been mixed into producer helpers

## Step 5: Implement One Worker Loop

Add:

- `worker($id, $opts)` async helper
- one dedicated Redis connection per worker
- `BLPOP $queue_key 0`
- sentinel handling
- in-flight set add/remove
- simulated work with `Future::IO->sleep($delay)`
- processed counter increment
- worker start/finish output

Initially wire one worker and have the controller push one sentinel after all
jobs are processed.
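The worker loop described above might be sketched as below. The `connect_redis` helper, key names in `$opts`, and the `__STOP__` sentinel value are assumptions for illustration; the sketch also assumes `BLPOP` resolves to a `[$key, $value]` pair, as the Redis protocol returns:

```perl
use v5.40;
use Future::AsyncAwait;
use Future::IO;

my $STOP = '__STOP__';   # hypothetical sentinel value

# One worker: dedicated connection, blocking pop, sentinel-driven exit.
async sub worker ($id, $opts) {
    my $redis = await connect_redis($opts);   # hypothetical per-worker connection

    while (1) {
        # BLPOP with timeout 0 blocks until a job (or sentinel) arrives.
        my ($key, $job) = ( await $redis->blpop($opts->{queue_key}, 0) )->@*;
        last if $job eq $STOP;

        say "worker-$id start $job";
        await $redis->sadd($opts->{inflight_key}, $job);
        await Future::IO->sleep($opts->{delay});   # simulated work, non-blocking
        await $redis->srem($opts->{inflight_key}, $job);
        await $redis->incr($opts->{processed_key});
        say "worker-$id finish $job";
    }

    # Disconnect on the normal sentinel path too, not just on errors.
    $redis->disconnect;
}
```

Because the sentinel exits the loop before the in-flight/processed bookkeeping, it can never be counted as a processed job.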

Verification:

```bash
perlbrew use perl-5.40.0@default
perl -c examples/async-job-queue/app.pl
REDIS_HOST=localhost perl examples/async-job-queue/app.pl --jobs 2 --workers 1 --delay 0.1
```

Expected:

- both jobs are started and finished by `worker-1`
- final processed count is `2`
- worker exits after consuming the sentinel

Review after step:

- `BLPOP` response shape is handled correctly
- worker disconnects even on normal sentinel exit
- in-flight set is cleaned after each job
- no busy polling

## Step 6: Run Multiple Workers Concurrently

Change `main()` to start `$workers` worker futures concurrently and wait for
them all to finish.

Shutdown should push one sentinel per worker after all real jobs are processed.
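A minimal sketch of the concurrent startup and shutdown, assuming the `worker` helper from Step 5 and an illustrative `run_workers` name. Since `RPUSH`/`BLPOP` give FIFO ordering, sentinels pushed after the real jobs are only consumed once the queue has drained:

```perl
use v5.40;
use Future;
use Future::AsyncAwait;

my $STOP = '__STOP__';   # hypothetical sentinel value

# Start $opts->{workers} worker futures concurrently, then push one
# sentinel per worker so every blocked BLPOP is released.
async sub run_workers ($redis, $opts) {
    my @futures = map { worker($_, $opts) } 1 .. $opts->{workers};

    for (1 .. $opts->{workers}) {
        await $redis->rpush($opts->{queue_key}, $STOP);
    }

    # Wait for every worker future; none may be left blocked or dropped.
    await Future->wait_all(@futures);
}
```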

Verification:

```bash
perlbrew use perl-5.40.0@default
perl -c examples/async-job-queue/app.pl
REDIS_HOST=localhost perl examples/async-job-queue/app.pl --jobs 4 --workers 2 --delay 0.2
```

Expected:

- output shows both `worker-1` and `worker-2`
- the first two jobs start near the same timestamp
- elapsed time is closer to `0.4s` than `0.8s`
- final processed count is `4`

Review after step:

- each worker uses its own Redis connection
- all worker futures are awaited
- sentinels cannot be counted as processed jobs
- no worker can be left blocked after completion

## Step 7: Add Heartbeat Task

Add:

- separate stats Redis connection
- `heartbeat($opts)` async helper
- heartbeat loop every `0.25s`
- output with `queue=`, `in_flight=`, and `processed=`
- stop condition when processed count reaches target
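The heartbeat task could be sketched as follows. The `connect_redis` helper and the key names are assumptions carried over from the earlier sketches; the interval uses `Future::IO->sleep`, which yields to the event loop rather than blocking it:

```perl
use v5.40;
use Future::AsyncAwait;
use Future::IO;

# Periodic stats reporter on its own connection, exiting naturally
# once the processed count reaches the job target.
async sub heartbeat ($opts) {
    my $redis = await connect_redis($opts);   # hypothetical separate stats connection

    while (1) {
        my $queue     = await $redis->llen($opts->{queue_key});
        my $in_flight = await $redis->scard($opts->{inflight_key});
        my $processed = ( await $redis->get($opts->{processed_key}) ) // 0;
        say "heartbeat queue=$queue in_flight=$in_flight processed=$processed";

        last if $processed >= $opts->{jobs};
        await Future::IO->sleep(0.25);   # non-blocking interval
    }

    $redis->disconnect;
}
```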

Verification:

```bash
perlbrew use perl-5.40.0@default
perl -c examples/async-job-queue/app.pl
REDIS_HOST=localhost perl examples/async-job-queue/app.pl --jobs 6 --workers 2 --delay 0.2
```

Expected:

- heartbeat lines appear while jobs are still running
- heartbeat continues while workers are sleeping
- queue depth starts above zero and drains
- in-flight count reflects active workers

Review after step:

- heartbeat uses its own Redis connection
- heartbeat exits naturally after target processed count
- heartbeat interval is not implemented with blocking sleep
