6 changes: 6 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,11 @@
# Changelog

+## Upcoming version
+
+### Added
+
+- \[[#327](https://github.com/rust-vmm/vm-memory/pull/327)\] I/O virtual memory support via `IoMemory`, `IommuMemory`, and `Iommu`/`Iotlb`
+
## \[v0.17.1\]

No visible changes.
2 changes: 2 additions & 0 deletions Cargo.toml
@@ -16,13 +16,15 @@ default = ["rawfd"]
backend-bitmap = ["dep:libc", "dep:winapi"]
backend-mmap = ["dep:libc", "dep:winapi"]
backend-atomic = ["arc-swap"]
+iommu = ["dep:rangemap"]
rawfd = ["dep:libc"]
xen = ["backend-mmap", "bitflags", "vmm-sys-util"]

[dependencies]
libc = { version = "0.2.39", optional = true }
arc-swap = { version = "1.0.0", optional = true }
bitflags = { version = "2.4.0", optional = true }
+rangemap = { version = "1.5.1", optional = true }
thiserror = "2.0.16"
vmm-sys-util = { version = ">=0.12.1, <=0.15.0", optional = true }
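
For downstream crates the new functionality is strictly opt-in via the `iommu` feature. A hypothetical consumer manifest (the version requirement is illustrative, not taken from this PR):

```toml
# Hypothetical downstream Cargo.toml entry; the version shown is illustrative.
[dependencies]
vm-memory = { version = "0.17", features = ["backend-mmap", "iommu"] }
```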

30 changes: 27 additions & 3 deletions DESIGN.md
@@ -2,8 +2,8 @@

## Objectives

-- Provide a set of traits for accessing and configuring the physical memory of
-  a virtual machine.
+- Provide a set of traits for accessing and configuring the physical and/or
+  I/O virtual memory of a virtual machine.
- Provide a clean abstraction of the VM memory such that rust-vmm components
can use it without depending on the implementation details specific to
different VMMs.
@@ -122,6 +122,29 @@ let buf = &mut [0u8; 5];
let result = guest_memory_mmap.write(buf, addr);
```

+### I/O Virtual Address Space
+
+When an IOMMU is in use, there is no longer direct access to the guest
+(physical) address space; only the I/O virtual address space is accessible.
+In this case:
+
+- `IoMemory` replaces `GuestMemory`: it requires callers to specify the
+  access permissions they need (which are relevant for virtual memory), and
+  it drops interfaces that imply a mostly linear memory layout, because
+  virtual memory is fragmented into many pages instead of a few large
+  memory regions.
+- Any `IoMemory` still has a `GuestMemory` inside as the underlying address
+  space, but if an IOMMU is used, that will generally not be the guest
+  physical address space. With vhost-user, for example, it will be the VMM's
+  user address space instead.
+- `IommuMemory`, our only `IoMemory` implementation with actual IOMMU
+  support, uses an `Iommu` object to translate I/O virtual addresses (IOVAs)
+  into VMM user addresses (VUAs), which are then passed to the inner
+  `GuestMemory` implementation (such as `GuestMemoryMmap`); see the sketch
+  below.
+- For compatibility, `GuestAddress` refers to an address in any of these
+  address spaces:
+  - guest physical addresses (GPAs) when no IOMMU is used,
+  - I/O virtual addresses (IOVAs),
+  - VMM user addresses (VUAs).
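
The sketch below illustrates the translation flow described above. It is not the crate's actual API: `Permissions`, `Iommu::translate`, and the error handling are assumptions made for this example.

```rust
// Illustrative sketch only; `Permissions` and `Iommu::translate` are
// assumptions for this example, not the crate's actual API.
use std::ops::Range;

#[derive(Clone, Copy)]
pub enum Permissions {
    Read,
    Write,
    ReadWrite,
}

pub trait Iommu {
    /// Translates the IOVA range `iova..iova + len` for the given access,
    /// returning the corresponding VUA range, or `None` if the range is
    /// unmapped or the granted permissions are insufficient.
    fn translate(&self, iova: u64, len: u64, perm: Permissions) -> Option<Range<u64>>;
}

/// How an `IommuMemory`-style wrapper might service a read: translate the
/// IOVA first, then access the inner, VUA-addressed `GuestMemory`.
pub fn read_at_iova<I: Iommu>(iommu: &I, iova: u64, buf: &mut [u8]) -> Result<(), &'static str> {
    let vua = iommu
        .translate(iova, buf.len() as u64, Permissions::Read)
        .ok_or("IOVA range not mapped for reading")?;
    // A real implementation would now read from the underlying `GuestMemory`
    // at `vua.start`; elided in this sketch.
    let _ = vua;
    Ok(())
}
```

Because I/O virtual memory is fragmented into pages, a real implementation additionally has to split accesses that cross page boundaries into multiple translations; the sketch ignores this.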

### Utilities and Helpers

The following utilities and helper traits/macros are imported from the
@@ -143,7 +166,8 @@ with minor changes:
- `Address` inherits `AddressValue`
- `GuestMemoryRegion` inherits `Bytes<MemoryRegionAddress, E = Error>`. The
`Bytes` trait must be implemented.
-- `GuestMemory` has a generic implementation of `Bytes<GuestAddress>`.
+- `GuestMemory` has a generic implementation of `IoMemory` (sketch below).
+- `IoMemory` has a generic implementation of `Bytes<GuestAddress>`.
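
A simplified sketch of this layering, using hypothetical trait names so as not to suggest the real signatures:

```rust
// Hypothetical, simplified traits; the real `GuestMemory`/`IoMemory`
// traits have more methods and different bounds.
trait GuestMemoryLike {
    /// Whether `addr..addr + len` is entirely backed by memory regions.
    fn is_valid_range(&self, addr: u64, len: usize) -> bool;
}

trait IoMemoryLike {
    /// Like `is_valid_range`, but callers also state the access mode needed.
    fn is_accessible(&self, addr: u64, len: usize, write: bool) -> bool;
}

// Blanket impl: plain physical memory is trivially usable as I/O memory,
// with the identity translation and no permission restrictions.
impl<M: GuestMemoryLike> IoMemoryLike for M {
    fn is_accessible(&self, addr: u64, len: usize, _write: bool) -> bool {
        self.is_valid_range(addr, len)
    }
}
```

This layering is why the wrappers touched by this PR (for example `GuestMemoryAtomic` below) can relax their bound from `GuestMemory` to `IoMemory` without breaking existing `GuestMemory`-based users.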

**Types**:

2 changes: 1 addition & 1 deletion coverage_config_aarch64.json
@@ -1,5 +1,5 @@
{
"coverage_score": 85.2,
"exclude_path": "mmap/windows.rs",
"crate_features": "backend-mmap,backend-atomic,backend-bitmap"
"crate_features": "backend-mmap,backend-atomic,backend-bitmap,iommu"
}
4 changes: 2 additions & 2 deletions coverage_config_x86_64.json
@@ -1,5 +1,5 @@
{
"coverage_score": 90.82,
"coverage_score": 91.52,
"exclude_path": "mmap_windows.rs",
"crate_features": "backend-mmap,backend-atomic,backend-bitmap"
"crate_features": "backend-mmap,backend-atomic,backend-bitmap,iommu"
}
41 changes: 21 additions & 20 deletions src/atomic.rs
@@ -2,7 +2,7 @@
// Copyright (C) 2020 Red Hat, Inc. All rights reserved.
// SPDX-License-Identifier: Apache-2.0

-//! A wrapper over an `ArcSwap<GuestMemory>` struct to support RCU-style mutability.
+//! A wrapper over an `ArcSwap<IoMemory>` struct to support RCU-style mutability.
//!
//! With the `backend-atomic` feature enabled, simply replacing `GuestMemoryMmap`
//! with `GuestMemoryAtomic<GuestMemoryMmap>` will enable support for mutable memory maps.
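
A reader-side usage sketch (assumptions: the `backend-atomic` feature is enabled and both types are re-exported at the crate root):

```rust
// Sketch of typical reader-side usage; not part of this diff.
use vm_memory::{GuestAddressSpace, GuestMemoryAtomic, IoMemory};

fn reader_side<M: IoMemory>(mem: M) {
    let atomic = GuestMemoryAtomic::new(mem);

    // Cheap snapshot; readers are never blocked by concurrent map updates.
    let snapshot = atomic.memory();

    // Access memory through `snapshot` as if it were the memory object
    // itself; the snapshot stays valid even if the map is replaced.
    drop(snapshot);
}
```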
@@ -15,17 +15,17 @@ use arc_swap::{ArcSwap, Guard};
use std::ops::Deref;
use std::sync::{Arc, LockResult, Mutex, MutexGuard, PoisonError};

-use crate::{GuestAddressSpace, GuestMemory};
+use crate::{GuestAddressSpace, IoMemory};

/// A fast implementation of a mutable collection of memory regions.
///
/// This implementation uses `ArcSwap` to provide RCU-like snapshotting of the memory map:
-/// every update of the memory map creates a completely new `GuestMemory` object, and
+/// every update of the memory map creates a completely new `IoMemory` object, and
/// readers will not be blocked because the copies they retrieved will be collected once
/// no one can access them anymore. Under the assumption that updates to the memory map
/// are rare, this allows a very efficient implementation of the `memory()` method.
#[derive(Debug)]
-pub struct GuestMemoryAtomic<M: GuestMemory> {
+pub struct GuestMemoryAtomic<M: IoMemory> {
// GuestAddressSpace<M>, which we want to implement, is basically a drop-in
// replacement for &M. Therefore, we need to pass to devices the `GuestMemoryAtomic`
// rather than a reference to it. To obtain this effect we wrap the actual fields
Expand All @@ -34,9 +34,9 @@ pub struct GuestMemoryAtomic<M: GuestMemory> {
inner: Arc<(ArcSwap<M>, Mutex<()>)>,
}

-impl<M: GuestMemory> From<Arc<M>> for GuestMemoryAtomic<M> {
+impl<M: IoMemory> From<Arc<M>> for GuestMemoryAtomic<M> {
/// create a new `GuestMemoryAtomic` object whose initial contents come from
-/// the `map` reference counted `GuestMemory`.
+/// the `map` reference counted `IoMemory`.
fn from(map: Arc<M>) -> Self {
let inner = (ArcSwap::new(map), Mutex::new(()));
GuestMemoryAtomic {
@@ -45,9 +45,9 @@ impl<M: IoMemory> From<Arc<M>> for GuestMemoryAtomic<M> {
}
}

-impl<M: GuestMemory> GuestMemoryAtomic<M> {
+impl<M: IoMemory> GuestMemoryAtomic<M> {
/// create a new `GuestMemoryAtomic` object whose initial contents come from
-/// the `map` `GuestMemory`.
+/// the `map` `IoMemory`.
pub fn new(map: M) -> Self {
Arc::new(map).into()
}
@@ -75,15 +75,15 @@ impl<M: IoMemory> GuestMemoryAtomic<M> {
}
}

-impl<M: GuestMemory> Clone for GuestMemoryAtomic<M> {
+impl<M: IoMemory> Clone for GuestMemoryAtomic<M> {
fn clone(&self) -> Self {
Self {
inner: self.inner.clone(),
}
}
}

-impl<M: GuestMemory> GuestAddressSpace for GuestMemoryAtomic<M> {
+impl<M: IoMemory> GuestAddressSpace for GuestMemoryAtomic<M> {
type T = GuestMemoryLoadGuard<M>;
type M = M;

@@ -94,14 +94,14 @@ impl<M: IoMemory> GuestAddressSpace for GuestMemoryAtomic<M> {

/// A guard that provides temporary access to a `GuestMemoryAtomic`. This
/// object is returned from the `memory()` method. It dereferences to
-/// a snapshot of the `GuestMemory`, so it can be used transparently to
+/// a snapshot of the `IoMemory`, so it can be used transparently to
/// access memory.
#[derive(Debug)]
-pub struct GuestMemoryLoadGuard<M: GuestMemory> {
+pub struct GuestMemoryLoadGuard<M: IoMemory> {
guard: Guard<Arc<M>>,
}

-impl<M: GuestMemory> GuestMemoryLoadGuard<M> {
+impl<M: IoMemory> GuestMemoryLoadGuard<M> {
/// Make a clone of the held pointer and return it. This is more
/// expensive than just using the snapshot, but it allows holding on
/// to the snapshot outside the scope of the guard. It also allows
@@ -112,15 +112,15 @@ impl<M: IoMemory> GuestMemoryLoadGuard<M> {
}
}

-impl<M: GuestMemory> Clone for GuestMemoryLoadGuard<M> {
+impl<M: IoMemory> Clone for GuestMemoryLoadGuard<M> {
fn clone(&self) -> Self {
GuestMemoryLoadGuard {
guard: Guard::from_inner(Arc::clone(&*self.guard)),
}
}
}

-impl<M: GuestMemory> Deref for GuestMemoryLoadGuard<M> {
+impl<M: IoMemory> Deref for GuestMemoryLoadGuard<M> {
type Target = M;

fn deref(&self) -> &Self::Target {
@@ -133,12 +133,12 @@ impl<M: IoMemory> Deref for GuestMemoryLoadGuard<M> {
/// possibly after updating the memory map represented by the
/// `GuestMemoryAtomic` that created the guard.
#[derive(Debug)]
-pub struct GuestMemoryExclusiveGuard<'a, M: GuestMemory> {
+pub struct GuestMemoryExclusiveGuard<'a, M: IoMemory> {
parent: &'a GuestMemoryAtomic<M>,
_guard: MutexGuard<'a, ()>,
}

-impl<M: GuestMemory> GuestMemoryExclusiveGuard<'_, M> {
+impl<M: IoMemory> GuestMemoryExclusiveGuard<'_, M> {
/// Replace the memory map in the `GuestMemoryAtomic` that created the guard
/// with the new memory map, `map`. The lock is then dropped since this
/// method consumes the guard.
@@ -151,7 +151,7 @@ impl<M: IoMemory> GuestMemoryExclusiveGuard<'_, M> {
mod tests {
use super::*;
use crate::region::tests::{new_guest_memory_collection_from_regions, Collection, MockRegion};
-use crate::{GuestAddress, GuestMemory, GuestMemoryRegion, GuestUsize};
+use crate::{GuestAddress, GuestMemory, GuestMemoryRegion, GuestUsize, IoMemory};

type GuestMemoryMmapAtomic = GuestMemoryAtomic<Collection>;

@@ -165,7 +165,8 @@ mod tests {
let mut iterated_regions = Vec::new();
let gmm = new_guest_memory_collection_from_regions(&regions).unwrap();
let gm = GuestMemoryMmapAtomic::new(gmm);
-let mem = gm.memory();
+let vmem = gm.memory();
+let mem = vmem.physical_memory().unwrap();

for region in mem.iter() {
assert_eq!(region.len(), region_size as GuestUsize);
@@ -184,7 +185,7 @@ mod tests {
.map(|x| (x.0, x.1))
.eq(iterated_regions.iter().copied()));

-let mem2 = mem.into_inner();
+let mem2 = vmem.into_inner();
for region in mem2.iter() {
assert_eq!(region.len(), region_size as GuestUsize);
}
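
For consumers, this test change doubles as the migration recipe: `memory()` now returns an `IoMemory` view, and region enumeration goes through the underlying physical memory. A hedged sketch (assuming `physical_memory()` returns an `Option` of the inner `GuestMemory`, as the `unwrap()` above suggests):

```rust
// Migration sketch; `physical_memory()` returning `Option` is an
// assumption based on the `unwrap()` in the test above.
use vm_memory::{GuestAddressSpace, GuestMemory, GuestMemoryAtomic, IoMemory};

fn region_count<M: IoMemory>(atomic: &GuestMemoryAtomic<M>) -> Option<usize> {
    let vmem = atomic.memory();
    let mem = vmem.physical_memory()?;
    Some(mem.iter().count())
}
```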