CVE-2020-12888: [PATCH v2 3/3] vfio-pci: Invalidate mmaps and block MMIO access on disabled memory

The VFIO PCI driver in the Linux kernel through 5.6.13 mishandles attempts to access disabled memory space.


From: Alex Williamson <[email protected]>
To: [email protected]
Cc: [email protected], [email protected], [email protected]
Subject: [PATCH v2 3/3] vfio-pci: Invalidate mmaps and block MMIO access on disabled memory
Date: Tue, 05 May 2020 15:55:02 -0600
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>

Accessing the disabled memory space of a PCI device would typically result in a master abort response on conventional PCI, or an unsupported request on PCI express. The user would generally see these as a -1 response for the read return data and the write would be silently discarded, possibly with an uncorrected, non-fatal AER error triggered on the host. Some systems however take it upon themselves to bring down the entire system when they see something that might indicate a loss of data, such as this discarded write to a disabled memory space.

To avoid this, we want to try to block the user from accessing memory spaces while they’re disabled. We start with a semaphore around the memory enable bit, where writers modify the memory enable state and must be serialized, while readers make use of the memory region and can access in parallel. Writers include both direct manipulation via the command register, as well as any reset path where the internal mechanics of the reset may both explicitly and implicitly disable memory access, and manipulation of the MSI-X configuration, where the MSI-X vector table resides in MMIO space of the device. Readers include the read and write file ops to access the vfio device fd offsets as well as memory mapped access. In the latter case, we make use of our new vma list support to zap, or invalidate, those memory mappings in order to force them to be faulted back in on access.
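To make that reader/writer split concrete, here is a minimal userspace sketch of the same pattern, using a POSIX rwlock in place of the kernel's rw_semaphore; the function names and placeholder values are illustrative and are not taken from the patch:

/*
 * Illustrative userspace analogue of the patch's memory_lock scheme; the
 * real code uses a kernel rw_semaphore in vfio_pci.c.  Writers serialize
 * changes to the memory-enable state, readers perform MMIO-style accesses
 * in parallel and fail fast once memory is disabled.
 */
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_rwlock_t memory_lock = PTHREAD_RWLOCK_INITIALIZER;
static bool memory_enabled = true;

/* Writer side: command register write, function reset, or MSI-X setup. */
static void set_memory_enable(bool enable)
{
	pthread_rwlock_wrlock(&memory_lock);	/* excludes all readers */
	memory_enabled = enable;		/* mappings would be zapped here */
	pthread_rwlock_unlock(&memory_lock);
}

/* Reader side: the read()/write() file ops and the mmap fault handler. */
static int mmio_read(unsigned int *val)
{
	pthread_rwlock_rdlock(&memory_lock);	/* many readers in parallel */
	if (!memory_enabled) {
		pthread_rwlock_unlock(&memory_lock);
		return -EIO;			/* matches the patch's errno */
	}
	*val = 0xdeadbeef;			/* stand-in for a real BAR read */
	pthread_rwlock_unlock(&memory_lock);
	return 0;
}

int main(void)
{
	unsigned int v;

	set_memory_enable(false);
	printf("read while disabled: %d\n", mmio_read(&v));	/* -5 (EIO) */
	set_memory_enable(true);
	printf("read while enabled:  %d\n", mmio_read(&v));	/* 0 */
	return 0;
}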

Our semaphore usage will stall user access to MMIO spaces across internal operations like reset, but the user might experience new behavior when trying to access the MMIO space while disabled via the PCI command register. Access via read or write while disabled will return -EIO and access via memory maps will result in a SIGBUS. This is expected to be compatible with known use cases and potentially provides better error handling capabilities than present in the hardware, while avoiding the more readily accessible and severe platform error responses that might otherwise occur.
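From userspace, the two new failure modes could be observed roughly as follows. This is a hypothetical fragment: a real VFIO client would first obtain the device fd, region offsets, and BAR mapping through the usual VFIO container/group/device ioctls, which are omitted here:

/*
 * Hypothetical sketch of the new user-visible behavior; VFIO setup and
 * error handling are omitted, and the fd, offset, and mapping are assumed
 * to have been established through the normal VFIO ioctl sequence.
 */
#include <errno.h>
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

static sigjmp_buf bus_err;

static void on_sigbus(int sig)
{
	(void)sig;
	siglongjmp(bus_err, 1);
}

/* read()/write() path: a BAR access while memory is disabled now fails. */
static void probe_read(int device_fd, off_t bar_offset)
{
	unsigned int val;

	if (pread(device_fd, &val, sizeof(val), bar_offset) < 0)
		fprintf(stderr, "read: %s\n", strerror(errno));	/* EIO */
}

/* mmap path: touching the mapping while disabled now raises SIGBUS. */
static void probe_mmap(volatile unsigned int *bar_mapping)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = on_sigbus;
	sigemptyset(&sa.sa_mask);
	sigaction(SIGBUS, &sa, NULL);

	if (sigsetjmp(bus_err, 1) == 0)
		(void)*bar_mapping;	/* faults: memory is disabled */
	else
		fprintf(stderr, "mmap access raised SIGBUS\n");
}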

Signed-off-by: Alex Williamson <[email protected]>
---
 drivers/vfio/pci/vfio_pci.c         | 263 ++++++++++++++++++++++++++++++++---
 drivers/vfio/pci/vfio_pci_config.c  |  36 ++++-
 drivers/vfio/pci/vfio_pci_intrs.c   |  18 ++
 drivers/vfio/pci/vfio_pci_private.h |   5 +
 drivers/vfio/pci/vfio_pci_rdwr.c    |  12 ++
 5 files changed, 300 insertions(+), 34 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 66a545a01f8f..49ae9faa6099 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -26,6 +26,7 @@
 #include <linux/vfio.h>
 #include <linux/vgaarb.h>
 #include <linux/nospec.h>
+#include <linux/sched/mm.h>
 
 #include "vfio_pci_private.h"
 
@@ -184,6 +185,7 @@ static void vfio_pci_probe_mmaps(struct vfio_pci_device *vdev)
 
 static void vfio_pci_try_bus_reset(struct vfio_pci_device *vdev);
 static void vfio_pci_disable(struct vfio_pci_device *vdev);
+static int vfio_pci_try_zap_and_vma_lock_cb(struct pci_dev *pdev, void *data);
 
 /*
  * INTx masking requires the ability to disable INTx signaling via PCI_COMMAND
@@ -736,6 +738,12 @@ int vfio_pci_register_dev_region(struct vfio_pci_device *vdev,
 	return 0;
 }
 
+struct vfio_devices {
+	struct vfio_device	**devices;
+	int			cur_index;
+	int			max_index;
+};
+
 static long vfio_pci_ioctl(void *device_data,
 			   unsigned int cmd, unsigned long arg)
 {
@@ -984,8 +992,16 @@ static long vfio_pci_ioctl(void *device_data,
 		return ret;
 
 	} else if (cmd == VFIO_DEVICE_RESET) {
-		return vdev->reset_works ?
-			pci_try_reset_function(vdev->pdev) : -EINVAL;
+		int ret;
+
+		if (!vdev->reset_works)
+			return -EINVAL;
+
+		vfio_pci_zap_and_down_write_memory_lock(vdev);
+		ret = pci_try_reset_function(vdev->pdev);
+		up_write(&vdev->memory_lock);
+
+		return ret;
 
 	} else if (cmd == VFIO_DEVICE_GET_PCI_HOT_RESET_INFO) {
 		struct vfio_pci_hot_reset_info hdr;
@@ -1065,8 +1081,9 @@ static long vfio_pci_ioctl(void *device_data,
 		int32_t *group_fds;
 		struct vfio_pci_group_entry *groups;
 		struct vfio_pci_group_info info;
+		struct vfio_devices devs = { .cur_index = 0 };
 		bool slot = false;
-		int i, count = 0, ret = 0;
+		int i, group_idx, mem_idx = 0, count = 0, ret = 0;
 
 		minsz = offsetofend(struct vfio_pci_hot_reset, count);
 
@@ -1118,9 +1135,9 @@ static long vfio_pci_ioctl(void *device_data,
 		 * user interface and store the group and iommu ID.  This
 		 * ensures the group is held across the reset.
 		 */
-		for (i = 0; i < hdr.count; i++) {
+		for (group_idx = 0; group_idx < hdr.count; group_idx++) {
 			struct vfio_group *group;
-			struct fd f = fdget(group_fds[i]);
+			struct fd f = fdget(group_fds[group_idx]);
 			if (!f.file) {
 				ret = -EBADF;
 				break;
@@ -1133,8 +1150,9 @@ static long vfio_pci_ioctl(void *device_data,
 				break;
 			}
 
-			groups[i].group = group;
-			groups[i].id = vfio_external_user_iommu_id(group);
+			groups[group_idx].group = group;
+			groups[group_idx].id =
+					vfio_external_user_iommu_id(group);
 		}
 
 		kfree(group_fds);
@@ -1153,13 +1171,63 @@ static long vfio_pci_ioctl(void *device_data,
 		ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
 						    vfio_pci_validate_devs,
 						    &info, slot);
-		if (!ret)
-			/* User has access, do the reset */
-			ret = pci_reset_bus(vdev->pdev);
+		if (ret)
+			goto hot_reset_release;
+
+		devs.max_index = count;
+		devs.devices = kcalloc(count, sizeof(struct vfio_device *),
+				       GFP_KERNEL);
+		if (!devs.devices) {
+			ret = -ENOMEM;
+			goto hot_reset_release;
+		}
+
+		/*
+		 * We need to get memory_lock for each device, but devices
+		 * can share mmap_sem, therefore we need to zap and hold
+		 * the vma_lock for each device, and only then get each
+		 * memory_lock.
+		 */
+		ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
+					    vfio_pci_try_zap_and_vma_lock_cb,
+					    &devs, slot);
+		if (ret)
+			goto hot_reset_release;
+
+		for (; mem_idx < devs.cur_index; mem_idx++) {
+			struct vfio_pci_device *tmp;
+
+			tmp = vfio_device_data(devs.devices[mem_idx]);
+
+			ret = down_write_trylock(&tmp->memory_lock);
+			if (!ret) {
+				ret = -EBUSY;
+				goto hot_reset_release;
+			}
+			mutex_unlock(&tmp->vma_lock);
+		}
+
+		/* User has access, do the reset */
+		ret = pci_reset_bus(vdev->pdev);
 
 hot_reset_release:
-		for (i--; i >= 0; i--)
-			vfio_group_put_external_user(groups[i].group);
+		for (i = 0; i < devs.cur_index; i++) {
+			struct vfio_device *device;
+			struct vfio_pci_device *tmp;
+
+			device = devs.devices[i];
+			tmp = vfio_device_data(device);
+
+			if (i < mem_idx)
+				up_write(&tmp->memory_lock);
+			else
+				mutex_unlock(&tmp->vma_lock);
+			vfio_device_put(device);
+		}
+		kfree(devs.devices);
+
+		for (group_idx--; group_idx >= 0; group_idx--)
+			vfio_group_put_external_user(groups[group_idx].group);
 
 		kfree(groups);
 		return ret;
@@ -1299,8 +1367,107 @@ static ssize_t vfio_pci_write(void *device_data, const char __user *buf,
 	return vfio_pci_rw(device_data, (char __user *)buf, count, ppos, true);
 }
 
-static int vfio_pci_add_vma(struct vfio_pci_device *vdev,
-			    struct vm_area_struct *vma)
+/* Return 1 on zap and vma_lock acquired, 0 on contention (only with @try) */
+static int vfio_pci_zap_and_vma_lock(struct vfio_pci_device *vdev, bool try)
+{
+	struct vfio_pci_mmap_vma *mmap_vma, *tmp;
+
+	/*
+	 * Lock ordering:
+	 * vma_lock is nested under mmap_sem for vm_ops callback paths.
+	 * The memory_lock semaphore is used by both code paths calling
+	 * into this function to zap vmas and the vm_ops.fault callback
+	 * to protect the memory enable state of the device.
+	 *
+	 * When zapping vmas we need to maintain the mmap_sem => vma_lock
+	 * ordering, which requires using vma_lock to walk vma_list to
+	 * acquire an mm, then dropping vma_lock to get the mmap_sem and
+	 * reacquiring vma_lock.  This logic is derived from similar
+	 * requirements in uverbs_user_mmap_disassociate().
+	 *
+	 * mmap_sem must always be the top-level lock when it is taken.
+	 * Therefore we can only hold the memory_lock write lock when
+	 * vma_list is empty, as we'd need to take mmap_sem to clear
+	 * entries.  vma_list can only be guaranteed empty when holding
+	 * vma_lock, thus memory_lock is nested under vma_lock.
+	 *
+	 * This enables the vm_ops.fault callback to acquire vma_lock,
+	 * followed by memory_lock read lock, while already holding
+	 * mmap_sem without risk of deadlock.
+	 */
+	while (1) {
+		struct mm_struct *mm = NULL;
+
+		if (try) {
+			if (!mutex_trylock(&vdev->vma_lock))
+				return 0;
+		} else {
+			mutex_lock(&vdev->vma_lock);
+		}
+		while (!list_empty(&vdev->vma_list)) {
+			mmap_vma = list_first_entry(&vdev->vma_list,
+						    struct vfio_pci_mmap_vma,
+						    vma_next);
+			mm = mmap_vma->vma->vm_mm;
+			if (mmget_not_zero(mm))
+				break;
+
+			list_del(&mmap_vma->vma_next);
+			kfree(mmap_vma);
+			mm = NULL;
+		}
+		if (!mm)
+			return 1;
+		mutex_unlock(&vdev->vma_lock);
+
+		if (try) {
+			if (!down_read_trylock(&mm->mmap_sem)) {
+				mmput(mm);
+				return 0;
+			}
+		} else {
+			down_read(&mm->mmap_sem);
+		}
+		if (mmget_still_valid(mm)) {
+			if (try) {
+				if (!mutex_trylock(&vdev->vma_lock)) {
+					up_read(&mm->mmap_sem);
+					mmput(mm);
+					return 0;
+				}
+			} else {
+				mutex_lock(&vdev->vma_lock);
+			}
+			list_for_each_entry_safe(mmap_vma, tmp,
+						 &vdev->vma_list, vma_next) {
+				struct vm_area_struct *vma = mmap_vma->vma;
+
+				if (vma->vm_mm != mm)
+					continue;
+
+				list_del(&mmap_vma->vma_next);
+				kfree(mmap_vma);
+
+				zap_vma_ptes(vma, vma->vm_start,
+					     vma->vm_end - vma->vm_start);
+			}
+			mutex_unlock(&vdev->vma_lock);
+		}
+		up_read(&mm->mmap_sem);
+		mmput(mm);
+	}
+}
+
+void vfio_pci_zap_and_down_write_memory_lock(struct vfio_pci_device *vdev)
+{
+	vfio_pci_zap_and_vma_lock(vdev, false);
+	down_write(&vdev->memory_lock);
+	mutex_unlock(&vdev->vma_lock);
+}
+
+/* Caller holds vma_lock */
+static int __vfio_pci_add_vma(struct vfio_pci_device *vdev,
+			      struct vm_area_struct *vma)
 {
 	struct vfio_pci_mmap_vma *mmap_vma;
 
@@ -1309,10 +1476,7 @@ static int vfio_pci_add_vma(struct vfio_pci_device *vdev,
 		return -ENOMEM;
 
 	mmap_vma->vma = vma;
-
-	mutex_lock(&vdev->vma_lock);
 	list_add(&mmap_vma->vma_next, &vdev->vma_list);
-	mutex_unlock(&vdev->vma_lock);
 
 	return 0;
 }
@@ -1346,15 +1510,32 @@ static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct vfio_pci_device *vdev = vma->vm_private_data;
+	vm_fault_t ret = VM_FAULT_NOPAGE;
+
+	mutex_lock(&vdev->vma_lock);
+	down_read(&vdev->memory_lock);
+
+	if (!__vfio_pci_memory_enabled(vdev)) {
+		ret = VM_FAULT_SIGBUS;
+		mutex_unlock(&vdev->vma_lock);
+		goto up_out;
+	}
 
-	if (vfio_pci_add_vma(vdev, vma))
-		return VM_FAULT_OOM;
+	if (__vfio_pci_add_vma(vdev, vma)) {
+		ret = VM_FAULT_OOM;
+		mutex_unlock(&vdev->vma_lock);
+		goto up_out;
+	}
+
+	mutex_unlock(&vdev->vma_lock);
 
 	if (remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
 			    vma->vm_end - vma->vm_start, vma->vm_page_prot))
-		return VM_FAULT_SIGBUS;
+		ret = VM_FAULT_SIGBUS;
 
-	return VM_FAULT_NOPAGE;
+up_out:
+	up_read(&vdev->memory_lock);
+	return ret;
 }
 
 static const struct vm_operations_struct vfio_pci_mmap_ops = {
@@ -1680,6 +1861,7 @@ static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	INIT_LIST_HEAD(&vdev->ioeventfds_list);
 	mutex_init(&vdev->vma_lock);
 	INIT_LIST_HEAD(&vdev->vma_list);
+	init_rwsem(&vdev->memory_lock);
 
 	ret = vfio_add_group_dev(&pdev->dev, &vfio_pci_ops, vdev);
 	if (ret)
@@ -1933,12 +2115,6 @@ static void vfio_pci_reflck_put(struct vfio_pci_reflck *reflck)
 	kref_put_mutex(&reflck->kref, vfio_pci_reflck_release, &reflck_lock);
 }
 
-struct vfio_devices {
-	struct vfio_device	**devices;
-	int			cur_index;
-	int			max_index;
-};
-
 static int vfio_pci_get_unused_devs(struct pci_dev *pdev, void *data)
 {
 	struct vfio_devices *devs = data;
@@ -1969,6 +2145,39 @@ static int vfio_pci_get_unused_devs(struct pci_dev *pdev, void *data)
 	return 0;
 }
 
+static int vfio_pci_try_zap_and_vma_lock_cb(struct pci_dev *pdev, void *data)
+{
+	struct vfio_devices *devs = data;
+	struct vfio_device *device;
+	struct vfio_pci_device *vdev;
+
+	if (devs->cur_index == devs->max_index)
+		return -ENOSPC;
+
+	device = vfio_device_get_from_dev(&pdev->dev);
+	if (!device)
+		return -EINVAL;
+
+	if (pci_dev_driver(pdev) != &vfio_pci_driver) {
+		vfio_device_put(device);
+		return -EBUSY;
+	}
+
+	vdev = vfio_device_data(device);
+
+	/*
+	 * Locking multiple devices is prone to deadlock, runaway and
+	 * unwind if we hit contention.
+	 */
+	if (!vfio_pci_zap_and_vma_lock(vdev, true)) {
+		vfio_device_put(device);
+		return -EBUSY;
+	}
+
+	devs->devices[devs->cur_index++] = device;
+
+	return 0;
+}
+
 /*
  * If a bus or slot reset is available for the provided device and:
  *  - All of the devices affected by that bus or slot reset are unused
diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
index 90c0b80f8acf..3dcddbd572e6 100644
--- a/drivers/vfio/pci/vfio_pci_config.c
+++ b/drivers/vfio/pci/vfio_pci_config.c
@@ -395,6 +395,14 @@ static inline void p_setd(struct perm_bits *p, int off, u32 virt, u32 write)
 	*(__le32 *)(&p->write[off]) = cpu_to_le32(write);
 }
 
+/* Caller should hold memory_lock semaphore */
+bool __vfio_pci_memory_enabled(struct vfio_pci_device *vdev)
+{
+	u16 cmd = le16_to_cpu(*(__le16 *)&vdev->vconfig[PCI_COMMAND]);
+
+	return cmd & PCI_COMMAND_MEMORY;
+}
+
 /*
  * Restore the *real* BARs after we detect a FLR or backdoor reset.
  * (backdoor = some device specific technique that we didn't catch)
@@ -556,13 +564,18 @@ static int vfio_basic_config_write(struct vfio_pci_device *vdev, int pos,
 
 		new_cmd = le32_to_cpu(val);
 
+		phys_io = !!(phys_cmd & PCI_COMMAND_IO);
+		virt_io = !!(le16_to_cpu(*virt_cmd) & PCI_COMMAND_IO);
+		new_io = !!(new_cmd & PCI_COMMAND_IO);
+
 		phys_mem = !!(phys_cmd & PCI_COMMAND_MEMORY);
 		virt_mem = !!(le16_to_cpu(*virt_cmd) & PCI_COMMAND_MEMORY);
 		new_mem = !!(new_cmd & PCI_COMMAND_MEMORY);
 
-		phys_io = !!(phys_cmd & PCI_COMMAND_IO);
-		virt_io = !!(le16_to_cpu(*virt_cmd) & PCI_COMMAND_IO);
-		new_io = !!(new_cmd & PCI_COMMAND_IO);
+		if (!new_mem)
+			vfio_pci_zap_and_down_write_memory_lock(vdev);
+		else
+			down_write(&vdev->memory_lock);
 
 		/*
 		 * If the user is writing mem/io enable (new_mem/io) and we
@@ -579,8 +592,11 @@ static int vfio_basic_config_write(struct vfio_pci_device *vdev, int pos,
 	}
 
 	count = vfio_default_config_write(vdev, pos, count, perm, offset, val);
-	if (count < 0)
+	if (count < 0) {
+		if (offset == PCI_COMMAND)
+			up_write(&vdev->memory_lock);
 		return count;
+	}
 
 	/*
	 * Save current memory/io enable bits in vconfig to allow for
@@ -591,6 +607,8 @@ static int vfio_basic_config_write(struct vfio_pci_device *vdev, int pos,
 
 		*virt_cmd &= cpu_to_le16(~mask);
 		*virt_cmd |= cpu_to_le16(new_cmd & mask);
+
+		up_write(&vdev->memory_lock);
 	}
 
 	/* Emulate INTx disable */
@@ -828,8 +846,11 @@ static int vfio_exp_config_write(struct vfio_pci_device *vdev, int pos,
 						 pos - offset + PCI_EXP_DEVCAP,
 						 &cap);
 
-		if (!ret && (cap & PCI_EXP_DEVCAP_FLR))
+		if (!ret && (cap & PCI_EXP_DEVCAP_FLR)) {
+			vfio_pci_zap_and_down_write_memory_lock(vdev);
 			pci_try_reset_function(vdev->pdev);
+			up_write(&vdev->memory_lock);
+		}
 	}
 
 	/*
@@ -907,8 +928,11 @@ static int vfio_af_config_write(struct vfio_pci_device *vdev, int pos,
 						pos - offset + PCI_AF_CAP,
 						&cap);
 
-		if (!ret && (cap & PCI_AF_CAP_FLR) && (cap & PCI_AF_CAP_TP))
+		if (!ret && (cap & PCI_AF_CAP_FLR) && (cap & PCI_AF_CAP_TP)) {
+			vfio_pci_zap_and_down_write_memory_lock(vdev);
 			pci_try_reset_function(vdev->pdev);
+			up_write(&vdev->memory_lock);
+		}
 	}
 
 	return count;
diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index 2056f3f85f59..54102a7eb9d3 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -626,6 +626,8 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev, uint32_t flags,
 	int (*func)(struct vfio_pci_device *vdev, unsigned index,
 		    unsigned start, unsigned count, uint32_t flags,
 		    void *data) = NULL;
+	int ret;
+	u16 cmd;
 
 	switch (index) {
 	case VFIO_PCI_INTX_IRQ_INDEX:
@@ -673,5 +675,19 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev, uint32_t flags,
 	if (!func)
 		return -ENOTTY;
 
-	return func(vdev, index, start, count, flags, data);
+	if (index == VFIO_PCI_MSIX_IRQ_INDEX) {
+		down_write(&vdev->memory_lock);
+		pci_read_config_word(vdev->pdev, PCI_COMMAND, &cmd);
+		pci_write_config_word(vdev->pdev, PCI_COMMAND,
+				      cmd | PCI_COMMAND_MEMORY);
+	}
+
+	ret = func(vdev, index, start, count, flags, data);
+
+	if (index == VFIO_PCI_MSIX_IRQ_INDEX) {
+		pci_write_config_word(vdev->pdev, PCI_COMMAND, cmd);
+		up_write(&vdev->memory_lock);
+	}
+
+	return ret;
 }
diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index 9b25f9f6ce1d..c4f25f1e80d7 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -139,6 +139,7 @@ struct vfio_pci_device {
 	struct notifier_block	nb;
 	struct mutex		vma_lock;
 	struct list_head	vma_list;
+	struct rw_semaphore	memory_lock;
 };
 
 #define is_intx(vdev) (vdev->irq_type == VFIO_PCI_INTX_IRQ_INDEX)
@@ -181,6 +182,10 @@ extern int vfio_pci_register_dev_region(struct vfio_pci_device *vdev,
 extern int vfio_pci_set_power_state(struct vfio_pci_device *vdev,
 				    pci_power_t state);
 
+extern bool __vfio_pci_memory_enabled(struct vfio_pci_device *vdev);
+extern void vfio_pci_zap_and_down_write_memory_lock(struct vfio_pci_device
+						    *vdev);
+
 #ifdef CONFIG_VFIO_PCI_IGD
 extern int vfio_pci_igd_init(struct vfio_pci_device *vdev);
 #else
diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c
index a87992892a9f..f58c45308682 100644
--- a/drivers/vfio/pci/vfio_pci_rdwr.c
+++ b/drivers/vfio/pci/vfio_pci_rdwr.c
@@ -162,6 +162,7 @@ ssize_t vfio_pci_bar_rw(struct vfio_pci_device *vdev, char __user *buf,
 	size_t x_start = 0, x_end = 0;
 	resource_size_t end;
 	void __iomem *io;
+	struct resource *res = &vdev->pdev->resource[bar];
 	ssize_t done;
 
 	if (pci_resource_start(pdev, bar))
@@ -200,8 +201,19 @@ ssize_t vfio_pci_bar_rw(struct vfio_pci_device *vdev, char __user *buf,
 		x_end = vdev->msix_offset + vdev->msix_size;
 	}
 
+	if (res->flags & IORESOURCE_MEM) {
+		down_read(&vdev->memory_lock);
+		if (!__vfio_pci_memory_enabled(vdev)) {
+			up_read(&vdev->memory_lock);
+			return -EIO;
+		}
+	}
+
 	done = do_io_rw(io, buf, pos, count, x_start, x_end, iswrite);
 
+	if (res->flags & IORESOURCE_MEM)
+		up_read(&vdev->memory_lock);
+
 	if (done >= 0)
 		*ppos += done;
