Discussion
implementation
Rowan
Repo:

https://github.com/vegabook/BeamQN

Use Cases:

1. Continuously growing lazy lists that can be operated on with BQN functions (streaming financial data from an external resource).
2. Operating on Erlang data with BQN functions without any strict latency requirements (on-demand analysis of data from an external resource).
3. Potential integration with persistent databases (OLAP or OLTP).

Prior Work:
1. https://github.com/cannadayr/ebqn
2. https://github.com/cannadayr/rsbqn
3. https://github.com/relaypro-open/gen_q
4. https://github.com/gordonguthrie/pometo
5. https://github.com/mlochbaum/BQN/blob/master/docs/bqn.js

Approaches:
1. A BQN virtual machine in Erlang (EBQN).
2. A BQN virtual machine in Rust, integrated into the BEAM as a NIF (RSBQN).
3. CBQN integrated into the BEAM as a NIF.
4. CBQN integrated into the BEAM as a port driver.
5. CBQN integrated into the BEAM as a port.
6. Compiling BQN to BEAM bytecode.
7. A BQN interpreter in Erlang.

Analysis:

1. A BQN virtual machine in Erlang (EBQN).

    Pros:

    * This approach has the benefit of not requiring any native code.
    * No need for cooperative yielding to prevent scheduler collapse.
    * All terms are native Erlang terms.

    Cons:

    * EBQN was extremely slow, particularly when running the compiler stages.
    * Required using process terms as a heap.
    
    Potential Improvements:
    
    * The virtual machine could instead generate Erlang code that is compiled and benefits from the JIT (similar to the JavaScript BQN implementation).
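
    As a rough illustration of that idea, here is a minimal sketch of compiling generated Erlang source at runtime with the standard erl_scan/erl_parse/compile toolchain; the bqn_generated module body is a placeholder, not actual EBQN compiler output.

    ```erlang
    %% Sketch: dynamically compiling generated Erlang source so it benefits
    %% from the BEAM JIT. The source string below is a stand-in for whatever
    %% an EBQN-style code generator would emit.
    -module(gen_compile_demo).
    -export([compile_and_load/0]).

    compile_and_load() ->
        Src =
            "-module(bqn_generated).\n"
            "-export([double/1]).\n"
            "double(Xs) -> [2 * X || X <- Xs].\n",
        {ok, Tokens, _} = erl_scan:string(Src),
        %% Split the token stream into forms (terminated by 'dot' tokens),
        %% parse each form, then compile the abstract forms to a BEAM binary.
        Forms = [begin {ok, F} = erl_parse:parse_form(Ts), F end
                 || Ts <- split_forms(Tokens)],
        {ok, Mod, Bin} = compile:forms(Forms),
        {module, Mod} = code:load_binary(Mod, "bqn_generated.erl", Bin),
        Mod:double([1, 2, 3]).   %% => [2,4,6]

    %% Groups a flat token list into per-form token lists, each ending in its dot.
    split_forms(Tokens) -> split_forms(Tokens, [], []).

    split_forms([], [], Acc) -> lists:reverse(Acc);
    split_forms([{dot, _} = T | Rest], Cur, Acc) ->
        split_forms(Rest, [], [lists:reverse([T | Cur]) | Acc]);
    split_forms([T | Rest], Cur, Acc) ->
        split_forms(Rest, [T | Cur], Acc).
    ```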

2. A BQN virtual machine in Rust, integrated into the BEAM as a NIF (RSBQN).

    Pros:

    * Can be integrated into the BEAM using the production-ready rustler library.
    * Can be modified to yield to the Erlang scheduler (see https://github.com/rusterlium/rustler/pull/232).
    * Rust's compile-time guarantees rule out some classes of bugs that could crash the BEAM runtime.
    * Significantly faster than EBQN.
    * Concurrent garbage collector (https://docs.rs/crate/bacon_rajan_cc/latest).

    Cons:

    * Significantly slower than CBQN, especially with SIMD code operating on large arrays.
    * Additional BQN implementation maintenance (CBQN is the de facto BQN implementation).
    * Would require additional work to integrate.
        
    Potential Improvements:
    
    * There are projects that could potentially improve the performance of SIMD operations (https://github.com/minotaur-toolkit/minotaur).

3. CBQN integrated into the BEAM as a NIF.

    Pros:

    * CBQN is the de facto, high-performance BQN implementation.
    * Significant work has gone into optimization.

    Cons:

    * Modifying CBQN to use enif_schedule_nif to cooperatively yield to the scheduler might not be feasible.
    * Uses a mark-and-sweep garbage collector (cannot reliably make guarantees on collection time).
    * Must manage its own resources.
    * Would likely have to be a "dirty" NIF, running on its own thread, with interpreter handles managed from the BEAM (see the Erlang-side sketch below).
    * Would likely need additional work to properly free interpreter heaps.
    * Fewer compiler-level guarantees against crashing the BEAM.

    Potential Improvements:

    * Would have to familiarize ourselves with the CBQN source code and determine the feasibility of an alternative GC and of scheduler yielding.
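
    To make the dirty-NIF option more concrete, here is a hedged sketch of what the Erlang-facing wrapper might look like. The module name beamqn_nif, its function names, and the opaque handle type are hypothetical; the dirty scheduling itself is declared on the C side by registering the NIF with the ERL_NIF_DIRTY_JOB_CPU_BOUND flag.

    ```erlang
    %% Hypothetical Erlang-side wrapper for a CBQN dirty NIF. The C
    %% implementation would register eval/2 with ERL_NIF_DIRTY_JOB_CPU_BOUND
    %% so long-running evaluations run on a dirty scheduler thread instead
    %% of blocking a normal scheduler.
    -module(beamqn_nif).
    -export([new/0, eval/2]).
    -on_load(init/0).

    -type handle() :: term().  %% opaque NIF resource wrapping a CBQN interpreter heap

    init() ->
        %% Assumes an application named beamqn with the shared object in priv/.
        erlang:load_nif(filename:join(code:priv_dir(beamqn), "beamqn_nif"), 0).

    %% Creates a fresh interpreter handle. Freeing the underlying CBQN heap
    %% when the resource is garbage collected is where the "properly free
    %% interpreter heaps" work from the cons list would live.
    -spec new() -> handle().
    new() ->
        erlang:nif_error(nif_not_loaded).

    %% Evaluates a BQN source string against a handle and returns the result
    %% converted to an Erlang term.
    -spec eval(handle(), unicode:chardata()) -> term().
    eval(_Handle, _Src) ->
        erlang:nif_error(nif_not_loaded).
    ```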

4. CBQN integrated into the BEAM as a port driver.

    Pros:

    * Port drivers are asynchronous by default.
    * The only open-source, in-production integration of an array language into the BEAM (gen_q) uses a port driver.

    Cons:

    * Port drivers seem to be discouraged in the community (https://erlangforums.com/t/use-cases-for-port-drivers-when-is-it-better-to-use-a-port-driver-instead-of-a-nif/1772); however, it's not clear if there are any plans to remove them as a feature.

    Potential Improvements:

    * Needs more research.

5. CBQN integrated into the BEAM as a port.

    Pros:

    * Safest integration of an external application into the BEAM.
    * Simplest integration of an external application into the BEAM.
    * Asynchronous by default, communicates via message passing.
    * Will not crash the BEAM interpreter.

    Cons:

    * Significantly slower than a NIF or Port Driver.
    * Potential non-trivial overhead of encoding/decoding terms.
    * Potential non-trivial overhead of writing data to port handle.
    * Potential improvements seem unlikely (upper bound on performance).

    Potential Improvements:

    * See above.
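
    For comparison, here is a minimal sketch of the port approach, assuming a bqn executable on the PATH and a newline-delimited request/response protocol (both assumptions); a real integration would more likely use {packet, 4} framing with term_to_binary/binary_to_term, which is exactly where the encoding overhead noted above comes from.

    ```erlang
    %% Minimal port sketch: CBQN runs as a separate OS process and the BEAM
    %% talks to it over stdin/stdout. The executable name and the
    %% newline-delimited protocol are placeholders, not a real CBQN interface.
    -module(beamqn_port).
    -export([start/0, eval/2, stop/1]).

    start() ->
        Exe = os:find_executable("bqn"),  %% assumes a bqn binary on the PATH
        open_port({spawn_executable, Exe},
                  [binary, {line, 65536}, exit_status]).

    %% Sends one expression and waits for a single line of output.
    eval(Port, Expr) when is_binary(Expr) ->
        true = port_command(Port, <<Expr/binary, "\n">>),
        receive
            {Port, {data, {eol, Result}}} -> {ok, Result};
            {Port, {exit_status, Status}} -> {error, {exit, Status}}
        after 5000 ->
            {error, timeout}
        end.

    stop(Port) ->
        port_close(Port).
    ```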

6. Compiling BQN to BEAM bytecode.

    Pros:

    * Combines the advantages of native BEAM bytecode with staying close to the upstream implementation.

    Cons:

    * Way beyond my immediate familiarity.
    * Might require modifying the BQN compiler to emit a different intermediate representation (IR), if one exists.

    Potential Improvements:

    * Would be interesting as a research subject, but writing a bytecode compiler would take me significant effort at this time.

7. A BQN interpreter in Erlang.

    Pros:

    * Has the advantage of running as native BEAM bytecode.
    * No scheduler collapse.
    * No BEAM crashes.
    * All native Erlang terms.
    * Native BEAM term allocation and garbage collection.

    Cons:

    * Would require maintaining a separate implementation.
    * Would require modifying pometo to support BQN.

    Potential Improvements:

    * Singeli could potentially be used to improve SIMD performance for BQN primitives.
Top Answer
Rowan
Additional thoughts.

After reflection, I agree with Vegabook's original suggestion of a CBQN "dirty" NIF.
In practice, it seems like it would function similarly to a port driver, but with a simpler interface.

I think the use cases can largely be built using existing OTP libraries.
For example, here's a hypothetical architecture...

```
Data Handler -> OTP Logger -> Ticker -> Batch Insert Server --->Clickhouse
                                |-----> BQN Agent <-------------|
                                           ^
                                           |
                                        [CBQN Pool]
```
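
To make the "BQN Agent" box concrete, here is a hedged gen_server sketch. It assumes the hypothetical beamqn_nif wrapper sketched in the NIF discussion above, and collapses the CBQN pool to a single interpreter handle for brevity.

```erlang
%% Hedged sketch of the "BQN Agent": a gen_server that owns one CBQN
%% interpreter handle (via the hypothetical beamqn_nif wrapper) and
%% serializes evaluation requests from the ticker / batch insert server.
-module(bqn_agent).
-behaviour(gen_server).

-export([start_link/0, eval/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

%% Synchronously evaluates a BQN source string on the agent's handle.
eval(Src) ->
    gen_server:call(?MODULE, {eval, Src}, infinity).

init([]) ->
    {ok, #{handle => beamqn_nif:new()}}.

handle_call({eval, Src}, _From, #{handle := H} = State) ->
    {reply, beamqn_nif:eval(H, Src), State}.

handle_cast(_Msg, State) ->
    {noreply, State}.
```

A real deployment would put several of these behind a supervisor (the [CBQN Pool] box) and dispatch across them, but the shape of the interaction stays the same.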

Do we want it in C or Rust? I would suggest C. All the libraries and headers are in C.

Important links:
* https://github.com/dzaima/CBQN/tree/839cadb221ac22eddc7cfd7138ed1f826f65a56d#limitations
* https://github.com/dzaima/CBQN/blob/839cadb221ac22eddc7cfd7138ed1f826f65a56d/docs/system.md#ffi
* https://www.erlang.org/doc/tutorial/nif
* https://www.erlang.org/doc/man/erl_nif.html

Future stuff:
* debugging, profiling, safety
Way future stuff:
* Evaluate if we should work towards a "clean" NIF using `enif_consume_timeslice` and `enif_schedule_nif`.
Answer #2
Vegabook
It's Vegabook here (Thomas Browne) and I wanted to provide a few points of perspective on my interest in BQN on the BEAM, some of which I have already provided on Matrix. I provide this not because of me, per se, but because I believe that a non-trivial subset of data scientists will have similar perspectives, so my background may be useful. The below summarises some of my points from my chat with @Gander (Rowan Cannaday) two days ago (Friday 29 Dec 2023).

**Background**

I am neither a systems programmer, nor a programming language designer. That said I have [been around](https://stackoverflow.com/questions/993984/what-are-the-advantages-of-numpy-over-regular-python-lists) in the world of vectorised Python for a long time, and know it and R very well. The latter's first-class vectors are a big draw for me. I work extensively with relationships between time series so vectors and indeed matrices, even tensors, linear algebra, are a big focus. I do not yet know BQN in any serious way but I have read Marshall Lochbaum and Dzaima's conversations, and website materials quite extensively, and have played with Dyalog APL in the past. So I'm not completely and utterly green. 


**Interest in BQN**

R in particular opens up the possibility of very concise code operating on large mathematical topologies, which is not only pleasant in the sense of power it purveys, but usually it is semantically much clearer what one is trying to achieve, mathematically, than the constructs available in imperative programming languages. So I'm sold on array programming. 

I have certain issues with mathematical notation, so something that is still concise and powerful, but maps to computing more explicitly, makes complete sense. 

Marshall seems to have created something extremely rigorous in its thought processes and designs, and the CBQN implementation looks seriously capable of taking on the incumbents in terms of performance. Unfortunately, it is my view that while performance is not always a limiting factor, in my domain with often millions, sometimes billions, of data points, performance has to be within one order of magnitude of the competitors (Numpy and R). 


**Interest in the BEAM**

I work with streaming data extensively. Inevitably, and across domains, it is my experience that the older the data, the exponentially less useful it becomes. "Live" data is where the real opportunity lies, and for this Python is severely compromised, and R is essentially absent from even trying. Async in Python is fundamentally cooperative, with all its non-deterministic downsides, and (mostly) single-threaded (the multiprocessing module has its own large set of problems). R doesn't even attempt to be competitive in async. Both can be thought of primarily as "batch" languages. Languages which do not have a REPL do not qualify for exploratory data science, in my opinion, and functional languages which do (Haskell, OCaml) but map their pre-emptive multitasking capabilities to POSIX threads leave a lot of lightweight concurrency capability unaddressed.

Enter the BEAM, which is designed first for systems which are "alive". Without going into large amounts of waffling about what makes it great, let's just say that working with it (which has a learning curve) is a revelation for anybody working with any system that is in "live" operation, and that includes ingesting and transforming real-time data of any kind, and in complex systems with many moving parts. 

Naturally, latency "guarantees" exact a significant performance cost, and in most benchmarks the BEAM is around Python-level speed (sans Numpy). This makes it completely unsuited to data science, unless one binds into C libraries.

_A marriage of BQN and the BEAM therefore seems to combine two languages which are incredibly interesting and powerful in their own rights, but occupy fairly orthogonal feature spaces, both of which are IMO potentially interesting to many data scientists._


**Databases**

As an aside, I have worked with people who were experts in KDB/Q, and the tight integration between code and data in that (sadly very expensive) stack meant that for time series work these programmers were able to perform almost magical feats. I have extensive experience with row- and column-oriented databases of all kinds, and there's probably an opportunity to do something with BQN in this space. 


**Implementation**


I think Rowan has summarised the tradeoffs pretty well. I'll only say that it would be good to have a fairly close mirroring of BEAM semantics, namely, the ability to "spawn" BQN instances, even if, naturally, these would likely be much more heavyweight than the fine granularity of BEAM processes. My interest is in having BEAM ingest large amounts of real-time data, and having BQN do periodic "minibatch" algorithmic model re-calibration, and also to be able to use BQN for data exploration within the BEAM live environment. I have no idea how BEAM types will map to BQN types, but it goes without saying that this will have to be efficient in order not to incur large [de]serialization issues.


**My contribution**

I am working on a modular library (nominally called [BLXX](https://github.com/vegabook/blxx) for now) for bringing live data from APIs into the BEAM. The first API I am implementing is one for the Bloomberg Terminal, to which I have a subscription. The Bloomberg Terminal provides vast breadth and depth of data, and this library will afford the opportunity to test "BeamQN" extensively with real, large amounts of data. I plan to use this capability to climb the BQN learning curve with the motivation that I will not just be working with toy data. This will be the real deal, and I plan to use the output to perform genuinely useful work. Please note that finance is one domain I work in, but the library I am working on will be generic enough to work with any form of API-based streaming data that also allows for querying of historic data points. I am designing it with "behaviours" (~"interfaces" in other languages) that will allow others to add streaming data sources in a consistent way.
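
As an illustration of the behaviour idea, here is a hedged sketch of what a streaming-source behaviour could look like; the callback names and types are hypothetical, not the actual blxx API.

```erlang
%% Hypothetical sketch of a streaming-data-source behaviour, in the spirit
%% of what blxx describes: each source (Bloomberg or any other API)
%% implements the same callbacks so the rest of the pipeline stays
%% source-agnostic. Names and types are illustrative only.
-module(blxx_source).

-callback connect(Opts :: map()) ->
    {ok, State :: term()} | {error, term()}.

-callback subscribe(Tickers :: [binary()], State :: term()) ->
    {ok, NewState :: term()} | {error, term()}.

%% Called for every incoming tick; the implementation forwards the
%% normalized datum to the owning process (e.g. the data handler).
-callback handle_tick(Tick :: map(), State :: term()) ->
    {ok, NewState :: term()}.

%% Backfill: query historic data points over a time range.
-callback history(Ticker :: binary(),
                  From :: calendar:datetime(),
                  To :: calendar:datetime(),
                  State :: term()) ->
    {ok, [map()]} | {error, term()}.
```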

As I polish up my rusty C/Assembler skills I may become more directly useful in actual coding for this project, in the months ahead. 

Meantime I am excited enough in the possibilities of "BeamQN" that I am prepared to devote serious time to testing, documentation, and real world marketing / evangelism. 


Thomas
