Memory safety and concurrency problems almost always boil down to multiple pieces of a program accessing the same data. Rust’s secret weapon here is that ownership of data can only lie with a single scope at any given time.
This is a discipline that systems programmers have long followed by convention, but Rust’s compiler checks it statically for you. For concurrency, this means you can choose from a wide variety of paradigms, be it message passing, shared state, lock-free, or purely functional, and Rust will help you avoid common pitfalls.
A piece of data can be borrowed by multiple threads simultaneously, but then it cannot be mutated. “Lock data, not code” is strictly enforced in Rust.
Rust enforces safe usage of data types by making every data type declare whether it can safely be sent to (Send) and shared with (Sync) other threads. Even the most daring forms of sharing are guaranteed safe in Rust.
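As a minimal sketch of what that shared, read-only access looks like in practice (the variable names here are my own, not from the original example), a `Vec` wrapped in an `Arc` can be read by several threads at once precisely because none of them can mutate it:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // Immutable data behind an Arc can be shared freely across threads,
    // because Vec<i32> is both Send and Sync.
    let data = Arc::new(vec![1, 2, 3]);

    let handles: Vec<_> = (0..3)
        .map(|_| {
            let data = Arc::clone(&data);
            thread::spawn(move || data.iter().sum::<i32>())
        })
        .collect();

    for handle in handles {
        // Every thread reads the same vector; no thread can mutate it.
        assert_eq!(handle.join().unwrap(), 6);
    }
    println!("all threads saw the same immutable data");
}
```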
The owner can do anything it likes with the Vec, including mutating it by pushing. What we really wanted is to give print_arr temporary access to the Vec and then continue using it afterward.
If you own data, you can lend access to it to functions you call. Since borrows are temporary, use_vec retains ownership of the Vec; it can continue using it after the call to print_arr returns.
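A short sketch of that borrowing pattern, using the `print_arr` and `use_vec` names from the text (the function bodies are assumptions for illustration):

```rust
fn print_arr(arr: &[i32]) {
    // Borrowing: print_arr gets temporary, read-only access.
    println!("{:?}", arr);
}

fn use_vec() {
    let mut vec = vec![0, 1, 2, 3, 4];
    print_arr(&vec); // lend the vector out...
    vec.push(5);     // ...and keep using (and mutating) it afterward
    assert_eq!(vec.len(), 6);
}

fn main() {
    use_vec();
}
```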
The iterator keeps a pointer into the vector at the current and final positions, stepping one toward the other. Now that we’ve covered the basic ownership model in Rust, let’s see what it means for concurrency.
Proponents of the style emphasize the way that it ties together sharing and communication. Rust’s ownership makes it easy to turn that advice into a compiler-checked rule.
In Rust, a data type must be known to be safe to send between threads before it can be sent. As always in Rust, passing a T to the send function means transferring ownership of it.
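A minimal sketch of that ownership transfer through a channel (the message contents are my own illustration):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        let msg = String::from("hello");
        // send(msg) transfers ownership of the String to the receiver;
        // using `msg` after this line would be a compile error.
        tx.send(msg).unwrap();
    });

    let received = rx.recv().unwrap();
    assert_eq!(received, "hello");
    println!("got: {}", received);
}
```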
Both models become more important as more computers take advantage of their multiple processors. In current operating systems, an executed program’s code is run in a process, and the operating system manages multiple processes at once.
Within your program, you can also have independent parts that run simultaneously. The features that run these independent parts are called threads.
The function thread::spawn creates a new thread, taking a closure that contains the code the new thread should run.
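A minimal sketch of spawning a thread and waiting for its result (the closure body and return value are my own illustration):

```rust
use std::thread;

fn main() {
    // thread::spawn takes a closure and returns a JoinHandle.
    let handle = thread::spawn(|| {
        println!("hello from the spawned thread");
        42
    });

    // join() blocks until the thread finishes and yields its result.
    let result = handle.join().unwrap();
    assert_eq!(result, 42);
}
```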
One thing worth highlighting is that the current mpsc channels in Rust are part of the standard library, not the language. Let’s say our main thread needs to receive messages from a “producer”, then distribute the work received from the producer among a pool of workers, aggregate results in some way via a single consumer, and finally get those final results back to the main thread.
A solution to this problem is simply to turn the design on its head, and have each worker provide a “sender” for the producer to send work on. Again, just have that thread share its sender (or perhaps a clone thereof, in case you have multiple consumers).
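A sketch of that inverted design, assuming each worker owns its own channel and hands the `Sender` half to the producer (the worker count and work items are my own illustration):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Each worker creates its own channel and hands the Sender
    // to the producer, turning the design "on its head".
    let mut worker_senders = Vec::new();
    let mut handles = Vec::new();

    for id in 0..3 {
        let (tx, rx) = mpsc::channel::<u32>();
        worker_senders.push(tx);
        handles.push(thread::spawn(move || {
            let mut total = 0;
            for work in rx {
                total += work;
            }
            println!("worker {} processed total {}", id, total);
            total
        }));
    }

    // The producer distributes work over the senders it was given.
    for (i, work) in (1..=6).enumerate() {
        worker_senders[i % worker_senders.len()].send(work).unwrap();
    }
    drop(worker_senders); // closing all senders lets the workers exit

    let grand_total: u32 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    assert_eq!(grand_total, 21);
}
```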
Basically, these are loops that combine receiving messages from other threads with performing some “thread-local work”. I’m just trying to highlight that a loop combined with messaging can be a nice way to structure the execution of your program, even when it doesn’t have anything to do with asynchronous I/O.
The “event-loop” in our case is not “a place to handle events from the system in the context of asynchronous I/O”, rather the event-loop is a place for a concurrent component to execute things sequentially in a thread-local way, while perhaps letting the outside world in via receiving and handling of messages. The great benefit is that, while you are dealing with potentially complex concurrent logic, what you are looking at is quite simply a loop, with a predictable sequential type of logic at each iteration that is quite easy to reason about.
Remember this is happening at the ‘fan-in’ stage, and the consumer is receiving messages sent by the pool of executors. This looks like an infinite loop, but it’s not: once all the executors have dropped their end of the channel, the consumer thread will exit.
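A small sketch of that shutdown behavior, assuming a pool of executor threads that each hold a clone of the sender:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // A pool of "executors", each with a clone of the sender.
    for id in 0..4 {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(id).unwrap();
            // The clone is dropped here when the executor finishes.
        });
    }
    drop(tx); // drop the original sender too

    // Looks like an infinite loop, but recv() returns Err once every
    // sender has been dropped, so the consumer exits cleanly.
    let mut count = 0;
    while let Ok(_msg) = rx.recv() {
        count += 1;
    }
    assert_eq!(count, 4);
    println!("consumer exited after {} messages", count);
}
```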
An example of the power of this approach: when the consumer receives messages, it will mutate track_steps in order to “aggregate” the work being done by the workers. So while this mutation is done “in response” to receiving a message from another thread, “when” and “whether” this happens as part of an iteration of the consumer’s loop is entirely predictable.
These little things give you tremendous power to control the behavior of your components at each iteration of their loop, and therefore of your system as a whole. There is simply no guesswork involved regarding how this Quit message will travel from one event-loop to another, and affect the behavior of individual components and the entire system.
Another way to achieve a similar result could be to have the main thread share a sender with each worker, for them to signal when they want to be put on, or taken off, the queue to receive more work, which could look something like this. That would require a form of keeping track of workers by their “ID”, preferably baked right into the messaging.
The pattern of wrapping a Sender inside a struct, and associating a “send” with data or another operation, has different real-world use cases, such as: forwarding messages while adding extra information such as the ID of the source component (similar to our use case).
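A sketch of that wrapping pattern; the `TaggedSender` name and message type are hypothetical, chosen just to show a sender that stamps each message with its component’s ID:

```rust
use std::sync::mpsc::Sender;

// Hypothetical wrapper: tagging every message with the source's ID.
struct TaggedSender {
    id: u32,
    inner: Sender<(u32, String)>,
}

impl TaggedSender {
    fn send(&self, msg: String) {
        // Forward the message, adding the ID of the source component.
        self.inner.send((self.id, msg)).unwrap();
    }
}

fn main() {
    let (tx, rx) = std::sync::mpsc::channel();
    let sender = TaggedSender { id: 7, inner: tx };
    sender.send("work done".to_string());

    let (id, msg) = rx.recv().unwrap();
    assert_eq!(id, 7);
    assert_eq!(msg, "work done");
}
```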
Well, apparently Rust is smart enough to understand that this ‘continue’ isn’t assigning anything, but rather ending that particular arm of the program flow (you can also use ‘return’ similarly). So that means we’ll only get to this point at the end, when all the workflows have been executed and we’ve received a message from the consumer.
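A standalone sketch of why this type-checks: `continue` (like `return`) has the never type `!`, so it can stand in for the value of a match arm. The input data here is my own illustration:

```rust
fn main() {
    let inputs = vec!["3", "oops", "4"];
    let mut sum = 0;

    for s in &inputs {
        // `continue` has type `!`, so it can serve as this arm's value:
        // the compiler knows this arm never produces a number.
        let n: i32 = match s.parse() {
            Ok(n) => n,
            Err(_) => continue,
        };
        sum += n;
    }
    assert_eq!(sum, 7);
    println!("sum = {}", sum);
}
```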
In this post I’ll be covering what we learned, and how the Rust compiler saves you from some scary concurrency issues. Concurrency is the process of breaking a program down into units that can be executed in any arbitrary order.
A single computer program can create many threads, which themselves will execute in an undetermined order. The first reason is to take advantage of systems with multiple processors.
In order to do this, a program needs to be written in units that can be run independently, and threads are a great way to accomplish this. Even if your system only has a single processor, there are still good reasons to use threads.
If your program spends a lot of time waiting around, for instance, to make network requests, you might want to pause the thread that is waiting and switch to a thread that is ready to do work. Most modern operating systems have native support for threads.
Depending on the API, certain pieces of data can be shared between the parent process and the thread being spun off. Rather than actually creating a new process on the system and yielding control to the operating system, green threads don’t create any new OS construct. They simply give programmers a thread-like interface in which units of execution can be defined.
The downside is that in order to take advantage of multiple cores, OS-level threads must be used as well (see this for more detail). The main thing to note here is that we are passing a closure to thread::spawn().
Don’t worry too much about the move keyword; I’ll explain it in more detail later. When running this program, we can see that two new threads get created, with the same parent process of `./target/debug/concurrency`.
As I discussed in my last blog post, in Rust, after variables go out of scope, the memory allocated to them is deallocated and returned to the operating system. An important detail in the last example is that when we call v.push() in the context of the thread, ownership of v is still in the enclosing scope.
If we wanted to move ownership of v to the thread, Rust has a handy construct for that: the move keyword. The downside of doing this is that once ownership has been moved to the thread, the variable can no longer be used in the enclosing scope.
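A minimal sketch of the move keyword in action (the vector contents are my own illustration):

```rust
use std::thread;

fn main() {
    let v = vec![1, 2, 3];

    // `move` transfers ownership of `v` into the closure, so the
    // thread can safely outlive the current stack frame.
    let handle = thread::spawn(move || {
        println!("here's the vector: {:?}", v);
        v.len()
    });

    // `v` can no longer be used here: ownership moved to the thread.
    assert_eq!(handle.join().unwrap(), 3);
}
```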
Let’s take the following code example, in which we pass a mutable reference down to a function that creates a thread. Now, that is the case with inner_func: the vector v does live longer than the function call to inner_func.
However, the Rust compiler is smart enough to recognize that ownership of ref is passed to a closure that could potentially live longer than main, and since that is the case, ref must satisfy the 'static lifetime. String literals have a 'static lifetime by default, and so do any values declared with the static keyword.
Yet again, using lifetimes this time, the Rust compiler prevents a concurrency bug! So far, I’ve given a bunch of examples of things that you can’t do in Rust in regard to concurrency.
Arc is an atomically reference-counted pointer that allows multiple threads to share ownership of some piece of data. Mutex is another important structure; it ensures that only one thread at a time can access the data it guards.
Here’s how we would accomplish adding values to a vector in threads, using Arc and Mutex: when .clone() is called on the Arc, it increases the reference count and grants ownership of the data to the enclosing scope.
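A sketch of that pattern, assuming five threads each pushing one value (the loop bounds and values are my own illustration):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Arc gives shared ownership across threads; Mutex guards mutation.
    let data = Arc::new(Mutex::new(Vec::new()));
    let mut handles = Vec::new();

    for i in 0..5 {
        let data = Arc::clone(&data); // bumps the reference count
        handles.push(thread::spawn(move || {
            // lock() blocks until the mutex is free, then yields a guard.
            data.lock().unwrap().push(i);
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }

    // Threads finish in arbitrary order, so sort before comparing.
    let mut result = data.lock().unwrap().clone();
    result.sort();
    assert_eq!(result, vec![0, 1, 2, 3, 4]);
}
```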