Alongside these simple steps, some noticeable and relatively expensive changes can substantially reduce your RUST load times. You can march from one end of the island to the other without having to stop for pesky loading bars or stalling splash screens.
The way RUST accommodates its procedurally generated maps is to have your PC build the server-specific island from a seed code. Every hill, valley, pig, bear, and monument must be accounted for, and, most importantly, must be identical for every player.
Since it would take a massive transfer to send each player a copy of the map, they send the seed instead, and your RUST client does the hard work of building that server’s island from that seed. Games that don’t have variable maps can give the player a hard-baked version of the information required.
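The seed mechanism can be illustrated with a toy sketch (this is not RUST's actual algorithm, just a demonstration of the principle): a deterministic random number generator seeded with the same value always produces the same sequence, so every client rebuilds the same island.

```rust
// Illustrative only: RUST's real world generation is far more sophisticated.
// The point is determinism - same seed in, same "island" out on every client.

/// A tiny linear congruential generator (constants from Knuth's MMIX LCG).
struct Lcg(u64);

impl Lcg {
    fn next_value(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0
    }
}

/// Derive a toy "terrain" from a seed: same seed, same terrain.
fn generate_terrain(seed: u64, len: usize) -> Vec<u64> {
    let mut rng = Lcg(seed);
    (0..len).map(|_| rng.next_value()).collect()
}

fn main() {
    // Two clients given the same seed reconstruct identical maps...
    assert_eq!(generate_terrain(1234, 8), generate_terrain(1234, 8));
    // ...while a different seed produces a different island.
    assert_ne!(generate_terrain(1234, 8), generate_terrain(5678, 8));
    println!("identical seeds produced identical terrain");
}
```

This is why sending the seed is so cheap compared to shipping the map itself: a few bytes stand in for the entire island, at the cost of the client doing the generation work at load time.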
Building the island from the seed happens on top of the standard game loading work, as the client also caches textures, geometry, and other server information. One route to faster loads is free and easy, requiring simple, quick changes to software and settings that you may not have considered.
The drawback of doing this is that you may load into RUST with graphical errors and unloaded textures that take roughly as long to repair as simply waiting for the asset load to finish would have. The RUST client handles this poorly, and your load times will improve simply by avoiding the practice.
While the Steam client should remain up to date automatically, it can stall and require manual updating. As your hard drive continuously loads and unloads files, sometimes bits of data fall out of place.
The repair process for getting things back in order is to run a defrag on your PC. Start the program, and you will see information on the state of drive fragmentation and how long it’s been since the last defrag.
It’s not a bad idea to set up a defrag schedule so that your PC runs through the process once every few weeks. Depending on the drive’s state, defragging can take some time, and it’s better done when you’re not actively using the PC, so let it run overnight if it’s got a lot to do.
However, for substantial gains in RUST’s load time, you may have to consider purchasing upgraded hardware for your rig. As RUST loads, it must cache as much of the client’s information as possible, which is your RAM’s job.
There may be RAM timings slightly more favorable to your setup, but those loading gains will cap out quickly. While there’s no need for SSD speeds for media storage or basic PC functions, an SSD will load things incredibly fast, making it ideal for operating systems and, you guessed it, games like RUST.
NVMe and SATA SSDs are significant improvements over the traditional spinning HDD. If you can’t afford hardware upgrades right now, focus on the software changes and improvements you can make and see what works for you.
We will learn how to package our Rust application as a Docker container to deploy it on DigitalOcean's App Platform.
Pick a random book on web development, or an introduction to framework XYZ: deployment is rarely covered in any depth. Most authors steer away from the topic because it takes many pages, and it is painful to write something down only to realize, one or two years later, that it is already out of date.
That is why we are talking about deployment as early as chapter five: to give you the chance to practice this muscle for the rest of the book, as you would actually be doing if this were a real commercial project. We are particularly interested in how the engineering practice of continuous deployment influences our design choices and development habits.
We have to be pragmatic and strike a balance between intrinsic usefulness (i.e. learn a tool that is valued in the industry) and developer experience. Production environments, instead, have a much narrower focus: running our software to make it available to our users.
Anything that is not strictly related to that goal is either a waste of resources, at best, or a security liability, at worst. This discrepancy has historically made deployments fairly troublesome, giving rise to the infamous complaint “It works on my machine!”.
Our software is likely to make assumptions about the capabilities exposed by the underlying operating system (e.g. a native Windows application will not run on Linux), about the availability of other software on the same machine (e.g. a certain version of the Python interpreter), or about its configuration (e.g. do I have root permissions?). Even if we started with two identical environments we would, over time, run into trouble as versions drift and subtle inconsistencies come up to haunt our nights and weekends.
The easiest way to ensure that our software runs correctly is to tightly control the environment it is being executed in. It would work great for both sides: fewer Friday-night surprises for you, the developer; a consistent abstraction to build on top of for those in charge of the production infrastructure.
We are looking for something that is easy to use (great developer experience, minimal unnecessary complexity) and fairly established. In November 2020, the intersection of those two requirements seems to be DigitalOcean, in particular their newly launched App Platform proposition.
You can picture the Docker image you are building as its own fully isolated environment. The only point of contact between the image and your local machine is commands like COPY or ADD: the build context determines which files on your host machine are visible inside the Docker container to COPY and its friends.
The build context implies, for example, that Docker will not allow COPY to see files from the parent directory, or to copy files from arbitrary paths on your machine into the image. sqlx calls into our database at compile-time to ensure that all queries can be successfully executed against the schemas of our tables.
When running cargo build inside our Docker image, though, sqlx fails to establish a connection with the database that the DATABASE_URL environment variable in the .env file points to. We could allow our image to talk to a database running on our local machine at build time using the --network flag.
This is the strategy we follow in our CI pipeline, given that we need the database anyway to run our integration tests. In other words, sqlx prepare performs the same work that is usually done when cargo build is invoked, but saves the outcome of those queries to a metadata file (sqlx-data.json) which can later be detected by sqlx itself and used to skip the queries altogether, performing an offline build.
By default, Docker images do not expose their ports to the underlying host machine; the mapping has to be specified explicitly (e.g. via docker run's -p flag). Without it, trying to hit the health check endpoint will trigger the same connection error.
We need to use 0.0.0.0 as host to instruct our application to accept connections from any network interface, not just the local one. We should be careful though: using 0.0.0.0 significantly increases the “audience” of our application, with some security implications.
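A std-only sketch of the difference (port 0 asks the OS for any free port):

```rust
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Binding to 127.0.0.1 only accepts connections from the same machine.
    // Inside a container, "the same machine" is the container itself, so
    // traffic forwarded from the host never reaches the application.
    let loopback_only = TcpListener::bind("127.0.0.1:0")?;
    println!("loopback only: {}", loopback_only.local_addr()?);

    // Binding to 0.0.0.0 accepts connections on every network interface,
    // which is what lets the host (or other containers) reach the app.
    // The flip side: anyone who can route to one of those interfaces can
    // connect - hence the security implications mentioned above.
    let all_interfaces = TcpListener::bind("0.0.0.0:0")?;
    println!("all interfaces: {}", all_interfaces.local_addr()?);
    Ok(())
}
```

In practice this means using 0.0.0.0 as the host in our production (Docker) configuration, while keeping 127.0.0.1 for local development.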
This is extremely convenient: it can take quite a long time to build our image (and it certainly does in Rust!). To actually use the image we only need to pay for its download cost, which is directly related to its size.
We can use the bare operating system as the base image (debian:buster-slim) for our runtime stage. Rust shines at runtime, consistently delivering great performance, but it comes at a cost: compilation times.
They are a pain point quite common on web development projects like ours that pull in many foundational crates from the asynchronous ecosystem (tokio, actix-web, sqlx, etc.). The trick is optimizing the order of operations in your Dockerfile: anything that refers to files that change often (e.g. source code) should appear as late as possible, therefore maximizing the likelihood of the previous step being unchanged and allowing Docker to retrieve the result straight from the cache.
This guarantees that most of the work is cached as long as your dependency tree does not change between one build and the next. Once again, we can rely on a community project to expand cargo's default capability: cargo-chef.
We are using four stages: the first computes the recipe file, the second caches our dependencies, the third builds the binary, and the fourth is our runtime environment. As long as our dependencies do not change, the recipe.json file will stay the same, therefore the outcome of cargo chef cook --release --recipe-path recipe.json will be cached, massively speeding up our builds.
We are taking advantage of how Docker layer caching interacts with multi-stage builds: the COPY . statement in the planner stage will invalidate the cache for the planner container, but it will not invalidate the cache for the cacher container as long as the checksum of the recipe.json returned by cargo chef prepare does not change.
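The four stages described above might be sketched as follows (the Rust image tag and the binary name `app` are placeholders, not necessarily the exact values used elsewhere; cargo-chef's prepare/cook subcommands are real):

```dockerfile
# Stage 1: compute the dependency recipe.
FROM rust:1.49 AS planner
WORKDIR /app
RUN cargo install cargo-chef
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

# Stage 2: build (and cache) dependencies only.
# This layer is reused as long as recipe.json's checksum is unchanged.
FROM rust:1.49 AS cacher
WORKDIR /app
RUN cargo install cargo-chef
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json

# Stage 3: build our binary, reusing the cached dependency artifacts.
FROM rust:1.49 AS builder
WORKDIR /app
COPY --from=cacher /app/target target
COPY --from=cacher /usr/local/cargo /usr/local/cargo
COPY . .
# Use the sqlx metadata file instead of a live database at build time.
ENV SQLX_OFFLINE true
RUN cargo build --release --bin app

# Stage 4: slim runtime image - only the compiled binary is kept.
FROM debian:buster-slim AS runtime
WORKDIR /app
COPY --from=builder /app/target/release/app app
ENTRYPOINT ["./app"]
```

Editing source code only invalidates stage 3 onwards; the expensive dependency compilation in stage 2 comes straight from the cache.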
The POST /subscriptions endpoint is still failing, in the very same way it did locally: we do not have a live database backing our application in our production environment. In the meantime, we need to figure out how to point our application at the database in production.
This allows us to customize any value in our Settings struct using environment variables, overriding what is specified in our configuration files. It makes it possible to inject values that are too dynamic (i.e. not known a priori) or too sensitive to be stored in version control.
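A minimal std-only sketch of the override idea (the actual implementation differs, and the `APP_` prefix here is a hypothetical naming convention for this example):

```rust
use std::env;

/// An environment variable, if present, overrides the value read from a
/// configuration file. The `APP_` prefix is an assumed convention.
fn resolve(key: &str, file_value: &str) -> String {
    env::var(format!("APP_{}", key.to_uppercase()))
        .unwrap_or_else(|_| file_value.to_string())
}

fn main() {
    // With no APP_DATABASE_PORT set, the file value is used.
    println!("database_port = {}", resolve("database_port", "5432"));
    // Launching as `APP_DATABASE_PORT=5433 ./app` would pick up 5433
    // instead - no rebuild required.
}
```

This is what makes environment variables a good fit for dynamic or sensitive values: they are injected at launch time, outside version control, and changing one does not trigger a recompile.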
It also makes it fast to change the behavior of our application: we do not have to go through a full re-build if we want to tune one of those values (e.g. the database port). For languages like Rust, where a fresh build can take ten minutes or more, this can make the difference between a short outage and a substantial service degradation with customer-visible impact.
We will change its two methods to return a PgConnectOptions instead of a connection string: it will make it easier to manage all these moving parts. We want require_ssl to be false when we run the application locally (and for our test suite), but true in our production environment.