Data-Queue-Shared


README

        per-slot publication state machine instead of a shared mutex, and
        measures 1.3x-4.7x faster than Queue::Str depending on contention (1
        vs 8 writers, 32-byte payloads, single box). Use that variant if your
        messages share a size upper bound and you want lock-free-style
        scaling.
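The "per-slot publication state machine" is the core of Vyukov's bounded MPMC queue, listed in the features below. As a rough illustration, here is a single-process Perl sketch of the slot protocol. The package name and internals are invented for this sketch; plain Perl scalars stand in for the atomic counters and memory fences of a real implementation, so this demonstrates the state machine only, not actual lock-freedom.

```perl
use strict;
use warnings;

package Sketch::MPMC;

sub new {
    my ($class, $cap) = @_;            # capacity must be a power of two
    return bless {
        mask => $cap - 1,
        head => 0,                     # next position to pop
        tail => 0,                     # next position to push
        seq  => [0 .. $cap - 1],       # per-slot publication sequence numbers
        buf  => [(undef) x $cap],
    }, $class;
}

# A slot at position P is writable when its seq == P, readable when
# seq == P + 1, and recycled for the next lap by setting seq = P + capacity.
sub push {
    my ($self, $v) = @_;
    my $pos  = $self->{tail};
    my $slot = $pos & $self->{mask};
    return 0 if $self->{seq}[$slot] != $pos;   # slot still occupied: full
    $self->{buf}[$slot] = $v;
    $self->{seq}[$slot] = $pos + 1;            # publish: readers may take it
    $self->{tail} = $pos + 1;
    return 1;
}

sub pop {
    my ($self) = @_;
    my $pos  = $self->{head};
    my $slot = $pos & $self->{mask};
    return undef if $self->{seq}[$slot] != $pos + 1;  # nothing published
    my $v = $self->{buf}[$slot];
    $self->{seq}[$slot] = $pos + $self->{mask} + 1;   # free slot for next lap
    $self->{head} = $pos + 1;
    return $v;
}

package main;

my $q = Sketch::MPMC->new(4);
$q->push($_) for 1 .. 4;
print $q->push(5) ? "pushed\n" : "full\n";   # prints "full"
print $q->pop, "\n";                         # prints "1"
```

Because each slot carries its own sequence number, a producer and a consumer touching different slots never contend on a shared lock, which is where the scaling advantage over a mutex-guarded queue comes from.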

  Features
    *   File-backed mmap for cross-process sharing

    *   Lock-free MPMC for integer queues (Vyukov algorithm)

    *   Futex-based blocking wait with timeout (no busy-spin)

    *   PID-based stale lock recovery (dead process detection)

    *   Batch push/pop operations

    *   Circular arena for zero-fragmentation string storage

    *   Optional keyword API via XS::Parse::Keyword (zero method-dispatch
        overhead)
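The "PID-based stale lock recovery" feature rests on a standard Unix idiom: store the lock owner's PID and probe it with signal 0, which checks process existence without delivering anything. A minimal sketch of the probe follows; the function name and the surrounding recovery logic are illustrative, as this excerpt does not show the module's actual lock layout or API.

```perl
use strict;
use warnings;
use Errno ();   # loading Errno enables the %! error-name hash

# Returns true if $pid refers to a live process. `kill 0` succeeds for
# processes we can signal; EPERM means the process exists but belongs to
# another user, which still counts as alive for stale-lock purposes.
sub pid_alive {
    my ($pid) = @_;
    return 1 if kill 0, $pid;
    return $!{EPERM} ? 1 : 0;
}

# A recovery path would treat a lock whose recorded owner fails this
# check as stale and reclaim it (in shared memory, typically with a
# compare-and-swap on the PID field so two waiters cannot both reclaim).
printf "current process (%d) alive: %d\n", $$, pid_alive($$);
```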

lib/Data/Queue/Shared.pm

=back

=head2 Features

=over

=item * File-backed mmap for cross-process sharing

=item * Lock-free MPMC for integer queues (Vyukov algorithm)

=item * Futex-based blocking wait with timeout (no busy-spin)

=item * PID-based stale lock recovery (dead process detection)

=item * Batch push/pop operations

=item * Circular arena for zero-fragmentation string storage

=item * Optional keyword API via XS::Parse::Keyword (zero method-dispatch overhead)

=back

xt/write_visibility.t

# How fast is a push in process A visible to a pop in process B?
# Baseline / regression doc. Typical: single-digit microseconds.

use strict;
use warnings;
use POSIX qw(_exit);
use Time::HiRes qw(gettimeofday tv_interval);
use Data::Queue::Shared;

my $q = Data::Queue::Shared::Int->new(undef, 64);
my $pid = fork // die "fork failed: $!";
if ($pid == 0) {
    # Consumer: drain until 1000 items have been seen
    my $n = 0;
    while ($n < 1000) {
        if (defined(my $v = $q->pop)) { $n++; next; }
        for (1..100) {}    # small busy wait before retrying
    }
    _exit(0);              # skip END blocks / destructors in the child
}

# Producer: push 1000 items, time the loop
my $t0 = [gettimeofday];
for (1..1000) {
    while (!$q->push($_)) { for (1..10) {} }    # spin while the queue is full
}
waitpid $pid, 0;
printf "pushed 1000 items in %.1f us/item\n", tv_interval($t0) * 1e6 / 1000;


