TensorlakeAI: Blocking, Deferred, And Asynchronous Function Calls

Alex Johnson

Hey guys! Let's dive into how we can make our TensorlakeAI function calls super flexible and powerful. We're talking about making them blocking by default, giving users the option to create futures for non-blocking behavior, and even setting up deferred function calls. This is all about giving you, the user, more control and making TensorlakeAI more adaptable to your needs.

Blocking Function Calls by Default

So, what's the deal with blocking function calls? Well, they're the standard way things work. When you call a function, your code waits right there until the function finishes executing and returns a result. It's like waiting in line at a coffee shop: you don't get your latte until the barista finishes making it. This is how the system currently works, and in many cases it's perfectly fine. It's straightforward and easy to understand. The functions run sequentially, and your code follows a predictable flow. However, sometimes you don't want to wait around. Sometimes you need to keep things moving. This is where non-blocking calls, through the use of futures, come into play.
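
To make that concrete, here's a quick sketch of blocking semantics in plain Python. The invoke_model name is hypothetical, just a stand-in for any TensorlakeAI function:

    import time

    def invoke_model(prompt: str) -> str:
        # Stand-in for a TensorlakeAI function; the sleep simulates
        # remote work such as model inference.
        time.sleep(2)
        return f"result for {prompt!r}"

    # This call blocks: the next line does not run until the function
    # has finished and returned its result.
    result = invoke_model("summarize this document")
    print(result)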

The Need for Control and Flexibility

Why are we even doing this? Because giving you control is key. By making calls blocking by default, we ensure that the system behaves predictably unless you explicitly request otherwise. This reduces the chances of unexpected behavior and makes it easier to reason about your code. But we're not stopping there. We're also giving you the ability to opt out of the blocking behavior when you want more flexibility and control. This means you can design systems that perform multiple tasks concurrently without being bogged down by operations that might take a long time to complete.

Implementation Details

We need to make all TensorlakeAI function calls blocking by default. The core idea is that when a function is called, it executes immediately, and the result is returned only when the function is finished. This is the behavior most users expect. On request, functions must also be able to return a Future object, which allows the caller to manage the execution lifecycle. Under the hood, this might involve using threads or an asynchronous event loop to handle the actual execution. This will vary depending on the underlying implementation, but the core principle is consistent: the function call is designed to block by default.
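
Here's one way this could be wired up, as a rough sketch rather than the actual TensorlakeAI internals (the FunctionRunner class and its call method are assumptions): submit every call to an executor, and in the default blocking mode resolve the future before returning.

    from concurrent.futures import Future, ThreadPoolExecutor

    class FunctionRunner:
        """Sketch of a runner that blocks by default but can hand back a Future."""

        def __init__(self) -> None:
            self._executor = ThreadPoolExecutor()

        def call(self, fn, *args, blocking: bool = True, **kwargs):
            # Every call is submitted to the executor either way.
            future: Future = self._executor.submit(fn, *args, **kwargs)
            if blocking:
                # Default path: wait for completion and return the value itself.
                return future.result()
            # Opt-out path: hand the Future back so the caller manages it.
            return future

With this shape, runner.call(fn, x) blocks as usual, while runner.call(fn, x, blocking=False) hands back a standard concurrent.futures.Future for the caller to manage.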

User Experience

This approach gives users a consistent and predictable experience. When users call a TensorlakeAI function, the call returns the result directly once the work is done, unless they choose otherwise. This is the simplest and most intuitive approach for many use cases. For advanced usage, users can leverage a future to achieve non-blocking behavior.

Introducing Futures for Non-Blocking Operations

Alright, so blocking calls are the default, but what if you want to do other things while a function is running? That's where futures come in. Think of a future as a promise of a result. You get the future immediately, and it represents a computation that will eventually produce the actual result. This is like ordering something online: you get a confirmation (the future), and the item (the result) arrives later.

Futures and the Standard Library

The most important aspect here is making sure our futures play nicely with the Python standard library. We need compatibility with asyncio.Future and concurrent.futures.Future. This is crucial for several reasons: it lets users integrate TensorlakeAI seamlessly with existing asynchronous code, it makes multithreading and multiprocessing easier to work with, and it lets users reach for standard Python concurrency tools directly. It also future-proofs our implementation, allowing us to adopt asynchronous functions later without breaking user code.

asyncio.Future and concurrent.futures.Future

These are powerful tools. asyncio.Future is designed for single-threaded asynchronous programming, while concurrent.futures.Future is designed for multi-threading and multi-processing. Having compatibility with both offers maximum flexibility. Users can choose the best approach for their use case. This compatibility also means users can use standard library tools to manage the futures, such as asyncio.wait, asyncio.gather, or the Future.result() and Future.cancel() methods.
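
To make that concrete, here's a quick standard-library-only sketch (work is a hypothetical stand-in for a TensorlakeAI function):

    import asyncio
    from concurrent.futures import ThreadPoolExecutor

    def work(x: int) -> int:
        return x * x

    executor = ThreadPoolExecutor()

    # concurrent.futures.Future: manage the call with result() and cancel().
    f = executor.submit(work, 3)
    print(f.result())  # blocks until the value is ready -> 9

    async def main() -> None:
        loop = asyncio.get_running_loop()
        # Bridge executor-backed calls into asyncio and gather several at once.
        results = await asyncio.gather(
            loop.run_in_executor(executor, work, 2),
            loop.run_in_executor(executor, work, 4),
        )
        print(results)  # [4, 16]

    asyncio.run(main())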

Example: Opt-in Non-Blocking Behavior

How do we make this work? Here's one way to implement it. When a user calls a TensorlakeAI function, the system can return an asyncio.Future immediately. The function's actual work is then performed in a separate thread or an asynchronous task, and the future is set when the result is available. The user can then await the future (if they are in an async context) or call future.result() to get the result, blocking until it's ready. The key here is that the user explicitly opts into the non-blocking behavior; they decide whether they need that capability.
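
Here's a minimal sketch of that flow using only the standard library. slow_function stands in for the function's actual work, and loop.run_in_executor handles the plumbing: it returns an awaitable future immediately, runs the work in a worker thread, and sets the future when the result is ready:

    import asyncio
    import time

    def slow_function(x: int) -> int:
        # Stand-in for the function's actual work.
        time.sleep(1)
        return x + 1

    async def main() -> None:
        loop = asyncio.get_running_loop()
        # The future comes back immediately; the work runs in a worker
        # thread, and the future is set when the result is available.
        future = loop.run_in_executor(None, slow_function, 41)
        print("doing other work while the call runs...")
        print(await future)  # 42

    asyncio.run(main())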

Deferred Function Calls

Let's talk about deferred function calls. This is like saying, "take this function call, hold onto it, and run it only when I tell you to." Instead of executing right away, a deferred call captures the function and its arguments now so they can be executed later.
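
As a rough sketch of the idea (the DeferredCall and defer names are hypothetical, not actual TensorlakeAI API), a deferred call can simply capture the function and its arguments now and execute them on demand later:

    from dataclasses import dataclass, field
    from typing import Any, Callable

    @dataclass
    class DeferredCall:
        """Captures a function call now so it can be executed later."""
        fn: Callable[..., Any]
        args: tuple = ()
        kwargs: dict = field(default_factory=dict)

        def run(self) -> Any:
            # Nothing executes until run() is invoked.
            return self.fn(*self.args, **self.kwargs)

    def defer(fn: Callable[..., Any], *args: Any, **kwargs: Any) -> DeferredCall:
        return DeferredCall(fn, args, kwargs)

    # Build the call now...
    call = defer(print, "runs later, not now")
    # ...and execute it whenever we choose.
    call.run()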
