author     Scott Long <scottl@FreeBSD.org>  2009-07-10 08:18:08 +0000
committer  Scott Long <scottl@FreeBSD.org>  2009-07-10 08:18:08 +0000
commit     52c9ce25d8339ad0228be8aaf0e44b45314b38dc (patch)
tree       65347229e3752769c4a701bd5f5308b2c8b4bf03 /sys
parent     f6c09dd6a8f15f3093d0e4eb226ce6ac0ab1c991 (diff)
Separate the parallel SCSI knowledge out of the core of the XPT, and
modularize it so that new transports can be created.

Add a transport for SATA.
Add a periph+protocol layer for ATA.
Add a driver for AHCI-compliant hardware.
Add a maxio field to CAM so that drivers can advertise their max I/O capability.
Modify various drivers so that they are insulated from the value of MAXPHYS.

The new ATA/SATA code supports AHCI-compliant hardware, and will override the
classic ATA driver if it is loaded as a module at boot time or compiled into
the kernel.  The stack now supports NCQ (tagged queueing) for increased
performance on modern SATA drives.  It also supports port multipliers.

ATA drives are accessed via 'ada' device nodes.  ATAPI drives are accessed
via 'cd' device nodes.  They can all be enumerated and manipulated via
camcontrol, just like SCSI drives.  SCSI commands are not translated to their
ATA equivalents; native ATA commands are used throughout the entire stack,
including camcontrol.  See the camcontrol manpage for further details.

Testing this code may require that you update your fstab, and possibly
modify your BIOS to enable AHCI functionality, if available.

This code is very experimental at the moment.  The userland ABI/API has
changed, so applications will need to be recompiled.  It may change further
in the near future.  The 'ada' device name may also change as more
infrastructure is completed in this project.  The goal is to eventually put
all CAM busses and devices under newbus, allowing for interesting topology
and management options.

Few functional changes will be seen with existing SCSI/SAS/FC drivers,
though the userland ABI has still changed.  In the future, transport-specific
modules for SAS and FC may appear in order to better support the topologies
and capabilities of these technologies.

The modularization of CAM and the addition of the ATA/SATA modules is meant
to break CAM out of the mold of being specific to SCSI, letting it grow to be
a framework for arbitrary transports and protocols.  It also allows drivers
to be written to support discrete hardware without jeopardizing the stability
of unrelated hardware.  While only an AHCI driver is provided now, a Silicon
Image driver is also in the works.  Drivers for ICH1-4, ICH5-6, PIIX, classic
IDE, and any other hardware are possible and encouraged.  Help with new
transports is also encouraged.

Submitted by:	scottl, mav
Approved by:	re
Notes: svn path=/head/; revision=195534
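Central to the new stack is the ataio CCB, whose helpers (e.g. ata_48bit_cmd() in sys/cam/ata/ata_all.c below) scatter a 48-bit LBA and 16-bit sector count across the classic and "previous content" (_exp) taskfile registers. The following self-contained sketch re-creates that byte split; struct tf and fill_48bit() are illustrative local stand-ins, not names from the commit:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Local stand-in for the LBA and sector count fields of struct ata_cmd.
 * In the 48-bit feature set each taskfile register pair holds a current
 * byte and a "previous content" (_exp) byte, giving 48 bits of LBA and
 * 16 bits of count.
 */
struct tf {
	uint8_t lba_low, lba_mid, lba_high;
	uint8_t lba_low_exp, lba_mid_exp, lba_high_exp;
	uint8_t sector_count, sector_count_exp;
};

/* Mirrors the byte scatter performed by ata_48bit_cmd(). */
static void
fill_48bit(struct tf *tf, uint64_t lba, uint16_t sector_count)
{
	tf->lba_low          = lba;
	tf->lba_mid          = lba >> 8;
	tf->lba_high         = lba >> 16;
	tf->lba_low_exp      = lba >> 24;
	tf->lba_mid_exp      = lba >> 32;
	tf->lba_high_exp     = lba >> 40;
	tf->sector_count     = sector_count;
	tf->sector_count_exp = sector_count >> 8;
}
```

The 28-bit form (ata_36bit_cmd() in the diff) instead folds LBA bits 24-27 into the low nibble of the DEVICE register.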
Diffstat (limited to 'sys')
-rw-r--r--  sys/cam/ata/ata_all.c          |  304
-rw-r--r--  sys/cam/ata/ata_all.h          |  105
-rw-r--r--  sys/cam/ata/ata_da.c           | 1144
-rw-r--r--  sys/cam/ata/ata_xpt.c          | 1895
-rw-r--r--  sys/cam/cam.c                  |    2
-rw-r--r--  sys/cam/cam.h                  |    1
-rw-r--r--  sys/cam/cam_ccb.h              |   68
-rw-r--r--  sys/cam/cam_periph.c           |   36
-rw-r--r--  sys/cam/cam_xpt.c              | 2613
-rw-r--r--  sys/cam/cam_xpt.h              |   46
-rw-r--r--  sys/cam/cam_xpt_internal.h     |  205
-rw-r--r--  sys/cam/cam_xpt_periph.h       |    1
-rw-r--r--  sys/cam/scsi/scsi_all.c        |    1
-rw-r--r--  sys/cam/scsi/scsi_cd.c         |    3
-rw-r--r--  sys/cam/scsi/scsi_ch.c         |    3
-rw-r--r--  sys/cam/scsi/scsi_da.c         |   11
-rw-r--r--  sys/cam/scsi/scsi_pass.c       |    3
-rw-r--r--  sys/cam/scsi/scsi_pt.c         |    3
-rw-r--r--  sys/cam/scsi/scsi_sa.c         |    3
-rw-r--r--  sys/cam/scsi/scsi_ses.c        |    3
-rw-r--r--  sys/cam/scsi/scsi_sg.c         |    3
-rw-r--r--  sys/cam/scsi/scsi_xpt.c        | 2382
-rw-r--r--  sys/conf/files                 |    5
-rw-r--r--  sys/dev/advansys/advansys.c    |    2
-rw-r--r--  sys/dev/advansys/advlib.h      |    2
-rw-r--r--  sys/dev/ahci/ahci.c            | 1858
-rw-r--r--  sys/dev/ahci/ahci.h            |  422
-rw-r--r--  sys/dev/aic7xxx/aic79xx_osm.h  |    6
-rw-r--r--  sys/dev/aic7xxx/aic7xxx_osm.h  |    7
-rw-r--r--  sys/dev/amd/amd.h              |    3
-rw-r--r--  sys/dev/ata/atapi-cam.c        |    8
-rw-r--r--  sys/dev/ciss/ciss.c            |    1
-rw-r--r--  sys/dev/ciss/cissvar.h         |    3
-rw-r--r--  sys/dev/isp/isp_freebsd.h      |    3
-rw-r--r--  sys/dev/mfi/mfi.c              |    2
-rw-r--r--  sys/dev/mfi/mfivar.h           |    1
-rw-r--r--  sys/dev/mlx/mlx.c              |    2
-rw-r--r--  sys/dev/mlx/mlxvar.h           |    1
-rw-r--r--  sys/dev/mpt/mpt.h              |    3
-rw-r--r--  sys/dev/mpt/mpt_pci.c          |    4
-rw-r--r--  sys/dev/trm/trm.h              |    3
-rw-r--r--  sys/modules/Makefile           |    1
-rw-r--r--  sys/modules/ahci/Makefile      |    8
-rw-r--r--  sys/modules/cam/Makefile       |    6
44 files changed, 8690 insertions, 2496 deletions
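One detail of the new ada driver worth noting before the diff: I/O is issued with classic 28-bit commands until either the ending LBA or the sector count forces the 48-bit form, as in adadump() below. The helper name need_48bit() here is hypothetical; the condition itself is the one used in the diff:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Highest LBA addressable by a 28-bit command (2^28 - 1), as defined
 * in ata_da.c. */
#define ATA_MAX_28BIT_LBA	268435455UL

/*
 * Hypothetical helper isolating the 28-bit vs. 48-bit decision from the
 * ada I/O path: switch to 48-bit commands when the transfer reaches
 * past the 28-bit LBA space, or when the count does not fit the 8-bit
 * sector count register of the classic taskfile.
 */
static bool
need_48bit(uint64_t lba, uint32_t count)
{
	return (lba + count >= ATA_MAX_28BIT_LBA || count >= 256);
}
```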
diff --git a/sys/cam/ata/ata_all.c b/sys/cam/ata/ata_all.c
new file mode 100644
index 000000000000..1e6eecec820c
--- /dev/null
+++ b/sys/cam/ata/ata_all.c
@@ -0,0 +1,304 @@
+/*-
+ * Copyright (c) 2009 Alexander Motin <mav@FreeBSD.org>
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer,
+ * without modification, immediately at the beginning of the file.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+#include <sys/param.h>
+
+#ifdef _KERNEL
+#include <opt_scsi.h>
+
+#include <sys/systm.h>
+#include <sys/libkern.h>
+#include <sys/kernel.h>
+#include <sys/sysctl.h>
+#else
+#include <errno.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#ifndef min
+#define min(a,b) (((a)<(b))?(a):(b))
+#endif
+#endif
+
+#include <cam/cam.h>
+#include <cam/cam_ccb.h>
+#include <cam/cam_queue.h>
+#include <cam/cam_xpt.h>
+#include <sys/ata.h>
+#include <cam/ata/ata_all.h>
+#include <sys/sbuf.h>
+#include <sys/endian.h>
+
+int
+ata_version(int ver)
+{
+ int bit;
+
+ if (ver == 0xffff)
+ return 0;
+ for (bit = 15; bit >= 0; bit--)
+ if (ver & (1<<bit))
+ return bit;
+ return 0;
+}
+
+void
+ata_print_ident(struct ata_params *ident_data)
+{
+ char product[48], revision[16];
+
+ cam_strvis(product, ident_data->model, sizeof(ident_data->model),
+ sizeof(product));
+ cam_strvis(revision, ident_data->revision, sizeof(ident_data->revision),
+ sizeof(revision));
+ printf("<%s %s> ATA/ATAPI-%d",
+ product, revision, ata_version(ident_data->version_major));
+ if (ident_data->satacapabilities && ident_data->satacapabilities != 0xffff) {
+ if (ident_data->satacapabilities & ATA_SATA_GEN2)
+ printf(" SATA 2.x");
+ else if (ident_data->satacapabilities & ATA_SATA_GEN1)
+ printf(" SATA 1.x");
+ else
+ printf(" SATA");
+ }
+ printf(" device\n");
+}
+
+void
+ata_36bit_cmd(struct ccb_ataio *ataio, uint8_t cmd, uint8_t features,
+ uint32_t lba, uint8_t sector_count)
+{
+ bzero(&ataio->cmd, sizeof(ataio->cmd));
+ ataio->cmd.flags = 0;
+ ataio->cmd.command = cmd;
+ ataio->cmd.features = features;
+ ataio->cmd.lba_low = lba;
+ ataio->cmd.lba_mid = lba >> 8;
+ ataio->cmd.lba_high = lba >> 16;
+ ataio->cmd.device = 0x40 | ((lba >> 24) & 0x0f);
+ ataio->cmd.sector_count = sector_count;
+}
+
+void
+ata_48bit_cmd(struct ccb_ataio *ataio, uint8_t cmd, uint16_t features,
+ uint64_t lba, uint16_t sector_count)
+{
+ bzero(&ataio->cmd, sizeof(ataio->cmd));
+ ataio->cmd.flags = CAM_ATAIO_48BIT;
+ ataio->cmd.command = cmd;
+ ataio->cmd.features = features;
+ ataio->cmd.lba_low = lba;
+ ataio->cmd.lba_mid = lba >> 8;
+ ataio->cmd.lba_high = lba >> 16;
+ ataio->cmd.device = 0x40;
+ ataio->cmd.lba_low_exp = lba >> 24;
+ ataio->cmd.lba_mid_exp = lba >> 32;
+ ataio->cmd.lba_high_exp = lba >> 40;
+ ataio->cmd.features_exp = features >> 8;
+ ataio->cmd.sector_count = sector_count;
+ ataio->cmd.sector_count_exp = sector_count >> 8;
+}
+
+void
+ata_ncq_cmd(struct ccb_ataio *ataio, uint8_t cmd,
+ uint64_t lba, uint16_t sector_count)
+{
+ bzero(&ataio->cmd, sizeof(ataio->cmd));
+ ataio->cmd.flags = CAM_ATAIO_48BIT | CAM_ATAIO_FPDMA;
+ ataio->cmd.command = cmd;
+ ataio->cmd.features = sector_count;
+ ataio->cmd.lba_low = lba;
+ ataio->cmd.lba_mid = lba >> 8;
+ ataio->cmd.lba_high = lba >> 16;
+ ataio->cmd.device = 0x40;
+ ataio->cmd.lba_low_exp = lba >> 24;
+ ataio->cmd.lba_mid_exp = lba >> 32;
+ ataio->cmd.lba_high_exp = lba >> 40;
+ ataio->cmd.features_exp = sector_count >> 8;
+}
+
+void
+ata_reset_cmd(struct ccb_ataio *ataio)
+{
+ bzero(&ataio->cmd, sizeof(ataio->cmd));
+ ataio->cmd.flags = CAM_ATAIO_CONTROL | CAM_ATAIO_NEEDRESULT;
+ ataio->cmd.control = 0x04;
+}
+
+void
+ata_pm_read_cmd(struct ccb_ataio *ataio, int reg, int port)
+{
+ bzero(&ataio->cmd, sizeof(ataio->cmd));
+ ataio->cmd.flags = CAM_ATAIO_48BIT | CAM_ATAIO_NEEDRESULT;
+ ataio->cmd.command = ATA_READ_PM;
+ ataio->cmd.features = reg;
+ ataio->cmd.features_exp = reg >> 8;
+ ataio->cmd.device = port & 0x0f;
+}
+
+void
+ata_pm_write_cmd(struct ccb_ataio *ataio, int reg, int port, uint64_t val)
+{
+ bzero(&ataio->cmd, sizeof(ataio->cmd));
+ ataio->cmd.flags = CAM_ATAIO_48BIT | CAM_ATAIO_NEEDRESULT;
+ ataio->cmd.command = ATA_WRITE_PM;
+ ataio->cmd.features = reg;
+ ataio->cmd.lba_low = val >> 8;
+ ataio->cmd.lba_mid = val >> 16;
+ ataio->cmd.lba_high = val >> 24;
+ ataio->cmd.device = port & 0x0f;
+ ataio->cmd.lba_low_exp = val >> 40;
+ ataio->cmd.lba_mid_exp = val >> 48;
+ ataio->cmd.lba_high_exp = val >> 56;
+ ataio->cmd.features_exp = reg >> 8;
+ ataio->cmd.sector_count = val;
+ ataio->cmd.sector_count_exp = val >> 32;
+}
+
+void
+ata_bswap(int8_t *buf, int len)
+{
+ u_int16_t *ptr = (u_int16_t*)(buf + len);
+
+ while (--ptr >= (u_int16_t*)buf)
+ *ptr = be16toh(*ptr);
+}
+
+void
+ata_btrim(int8_t *buf, int len)
+{
+ int8_t *ptr;
+
+ for (ptr = buf; ptr < buf+len; ++ptr)
+ if (!*ptr || *ptr == '_')
+ *ptr = ' ';
+ for (ptr = buf + len - 1; ptr >= buf && *ptr == ' '; --ptr)
+ *ptr = 0;
+}
+
+void
+ata_bpack(int8_t *src, int8_t *dst, int len)
+{
+ int i, j, blank;
+
+ for (i = j = blank = 0 ; i < len; i++) {
+ if (blank && src[i] == ' ') continue;
+ if (blank && src[i] != ' ') {
+ dst[j++] = src[i];
+ blank = 0;
+ continue;
+ }
+ if (src[i] == ' ') {
+ blank = 1;
+ if (i == 0)
+ continue;
+ }
+ dst[j++] = src[i];
+ }
+ while (j < len)
+ dst[j++] = 0x00;
+}
+
+int
+ata_max_pmode(struct ata_params *ap)
+{
+ if (ap->atavalid & ATA_FLAG_64_70) {
+ if (ap->apiomodes & 0x02)
+ return ATA_PIO4;
+ if (ap->apiomodes & 0x01)
+ return ATA_PIO3;
+ }
+ if (ap->mwdmamodes & 0x04)
+ return ATA_PIO4;
+ if (ap->mwdmamodes & 0x02)
+ return ATA_PIO3;
+ if (ap->mwdmamodes & 0x01)
+ return ATA_PIO2;
+ if ((ap->retired_piomode & ATA_RETIRED_PIO_MASK) == 0x200)
+ return ATA_PIO2;
+ if ((ap->retired_piomode & ATA_RETIRED_PIO_MASK) == 0x100)
+ return ATA_PIO1;
+ if ((ap->retired_piomode & ATA_RETIRED_PIO_MASK) == 0x000)
+ return ATA_PIO0;
+ return ATA_PIO0;
+}
+
+int
+ata_max_wmode(struct ata_params *ap)
+{
+ if (ap->mwdmamodes & 0x04)
+ return ATA_WDMA2;
+ if (ap->mwdmamodes & 0x02)
+ return ATA_WDMA1;
+ if (ap->mwdmamodes & 0x01)
+ return ATA_WDMA0;
+ return -1;
+}
+
+int
+ata_max_umode(struct ata_params *ap)
+{
+ if (ap->atavalid & ATA_FLAG_88) {
+ if (ap->udmamodes & 0x40)
+ return ATA_UDMA6;
+ if (ap->udmamodes & 0x20)
+ return ATA_UDMA5;
+ if (ap->udmamodes & 0x10)
+ return ATA_UDMA4;
+ if (ap->udmamodes & 0x08)
+ return ATA_UDMA3;
+ if (ap->udmamodes & 0x04)
+ return ATA_UDMA2;
+ if (ap->udmamodes & 0x02)
+ return ATA_UDMA1;
+ if (ap->udmamodes & 0x01)
+ return ATA_UDMA0;
+ }
+ return -1;
+}
+
+int
+ata_max_mode(struct ata_params *ap, int mode, int maxmode)
+{
+
+ if (maxmode && mode > maxmode)
+ mode = maxmode;
+
+ if (mode >= ATA_UDMA0 && ata_max_umode(ap) > 0)
+ return (min(mode, ata_max_umode(ap)));
+
+ if (mode >= ATA_WDMA0 && ata_max_wmode(ap) > 0)
+ return (min(mode, ata_max_wmode(ap)));
+
+ if (mode > ata_max_pmode(ap))
+ return (min(mode, ata_max_pmode(ap)));
+
+ return (mode);
+}
+
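The string helpers in ata_all.c above (ata_btrim(), ata_bpack()) normalize the space-padded, fixed-width fields of ATA IDENTIFY data before they are displayed. A standalone copy of ata_btrim() (using plain char in place of int8_t) shows the effect: NULs and underscores become spaces, then trailing spaces are overwritten with NULs:

```c
#include <assert.h>
#include <string.h>

/* Standalone copy of ata_btrim() from ata_all.c: map NUL bytes and '_'
 * to spaces, then trim trailing spaces by replacing them with NULs. */
static void
ata_btrim(char *buf, int len)
{
	char *ptr;

	for (ptr = buf; ptr < buf + len; ++ptr)
		if (!*ptr || *ptr == '_')
			*ptr = ' ';
	for (ptr = buf + len - 1; ptr >= buf && *ptr == ' '; --ptr)
		*ptr = 0;
}
```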
diff --git a/sys/cam/ata/ata_all.h b/sys/cam/ata/ata_all.h
new file mode 100644
index 000000000000..60129956db70
--- /dev/null
+++ b/sys/cam/ata/ata_all.h
@@ -0,0 +1,105 @@
+/*-
+ * Copyright (c) 2009 Alexander Motin <mav@FreeBSD.org>
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer,
+ * without modification, immediately at the beginning of the file.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $FreeBSD$
+ */
+
+#ifndef CAM_ATA_ALL_H
+#define CAM_ATA_ALL_H 1
+
+#include <sys/ata.h>
+
+struct ccb_ataio;
+struct cam_periph;
+union ccb;
+
+struct ata_cmd {
+ u_int8_t flags; /* ATA command flags */
+#define CAM_ATAIO_48BIT 0x01 /* Command has 48-bit format */
+#define CAM_ATAIO_FPDMA 0x02 /* FPDMA command */
+#define CAM_ATAIO_CONTROL 0x04 /* Control, not a command */
+#define CAM_ATAIO_NEEDRESULT 0x08 /* Request requires result. */
+
+ u_int8_t command;
+ u_int8_t features;
+
+ u_int8_t lba_low;
+ u_int8_t lba_mid;
+ u_int8_t lba_high;
+ u_int8_t device;
+
+ u_int8_t lba_low_exp;
+ u_int8_t lba_mid_exp;
+ u_int8_t lba_high_exp;
+ u_int8_t features_exp;
+
+ u_int8_t sector_count;
+ u_int8_t sector_count_exp;
+ u_int8_t control;
+};
+
+struct ata_res {
+ u_int8_t flags; /* ATA command flags */
+#define CAM_ATAIO_48BIT 0x01 /* Command has 48-bit format */
+
+ u_int8_t status;
+ u_int8_t error;
+
+ u_int8_t lba_low;
+ u_int8_t lba_mid;
+ u_int8_t lba_high;
+ u_int8_t device;
+
+ u_int8_t lba_low_exp;
+ u_int8_t lba_mid_exp;
+ u_int8_t lba_high_exp;
+
+ u_int8_t sector_count;
+ u_int8_t sector_count_exp;
+};
+
+int ata_version(int ver);
+void ata_print_ident(struct ata_params *ident_data);
+
+void ata_36bit_cmd(struct ccb_ataio *ataio, uint8_t cmd, uint8_t features,
+ uint32_t lba, uint8_t sector_count);
+void ata_48bit_cmd(struct ccb_ataio *ataio, uint8_t cmd, uint16_t features,
+ uint64_t lba, uint16_t sector_count);
+void ata_ncq_cmd(struct ccb_ataio *ataio, uint8_t cmd,
+ uint64_t lba, uint16_t sector_count);
+void ata_reset_cmd(struct ccb_ataio *ataio);
+void ata_pm_read_cmd(struct ccb_ataio *ataio, int reg, int port);
+void ata_pm_write_cmd(struct ccb_ataio *ataio, int reg, int port, uint64_t val);
+
+void ata_bswap(int8_t *buf, int len);
+void ata_btrim(int8_t *buf, int len);
+void ata_bpack(int8_t *src, int8_t *dst, int len);
+
+int ata_max_pmode(struct ata_params *ap);
+int ata_max_wmode(struct ata_params *ap);
+int ata_max_umode(struct ata_params *ap);
+int ata_max_mode(struct ata_params *ap, int mode, int maxmode);
+
+#endif
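One non-obvious encoding captured by struct ata_cmd above: for NCQ (FPDMA) commands, ata_ncq_cmd() places the sector count in the FEATURES register pair, since the sector count registers carry the queue tag on the wire. A minimal self-contained sketch (local struct and function names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-in for the features byte pair of struct ata_cmd. */
struct ncq_count {
	uint8_t features;	/* low byte of the sector count */
	uint8_t features_exp;	/* high byte of the sector count */
};

/* Mirrors how ata_ncq_cmd() stores the count for FPDMA commands. */
static void
ncq_set_count(struct ncq_count *r, uint16_t sector_count)
{
	r->features     = sector_count;
	r->features_exp = sector_count >> 8;
}
```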
diff --git a/sys/cam/ata/ata_da.c b/sys/cam/ata/ata_da.c
new file mode 100644
index 000000000000..b72c316dea4d
--- /dev/null
+++ b/sys/cam/ata/ata_da.c
@@ -0,0 +1,1144 @@
+/*-
+ * Copyright (c) 2009 Alexander Motin <mav@FreeBSD.org>
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer,
+ * without modification, immediately at the beginning of the file.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+#include <sys/param.h>
+
+#ifdef _KERNEL
+#include <sys/systm.h>
+#include <sys/kernel.h>
+#include <sys/bio.h>
+#include <sys/sysctl.h>
+#include <sys/taskqueue.h>
+#include <sys/lock.h>
+#include <sys/mutex.h>
+#include <sys/conf.h>
+#include <sys/devicestat.h>
+#include <sys/eventhandler.h>
+#include <sys/malloc.h>
+#include <sys/cons.h>
+#include <geom/geom_disk.h>
+#endif /* _KERNEL */
+
+#ifndef _KERNEL
+#include <stdio.h>
+#include <string.h>
+#endif /* _KERNEL */
+
+#include <cam/cam.h>
+#include <cam/cam_ccb.h>
+#include <cam/cam_periph.h>
+#include <cam/cam_xpt_periph.h>
+#include <cam/cam_sim.h>
+
+#include <cam/ata/ata_all.h>
+
+#ifdef _KERNEL
+
+#define ATA_MAX_28BIT_LBA 268435455UL
+
+typedef enum {
+ ADA_STATE_NORMAL
+} ada_state;
+
+typedef enum {
+ ADA_FLAG_PACK_INVALID = 0x001,
+ ADA_FLAG_CAN_48BIT = 0x002,
+ ADA_FLAG_CAN_FLUSHCACHE = 0x004,
+ ADA_FLAG_CAN_NCQ = 0x008,
+ ADA_FLAG_TAGGED_QUEUING = 0x010,
+ ADA_FLAG_NEED_OTAG = 0x020,
+ ADA_FLAG_WENT_IDLE = 0x040,
+ ADA_FLAG_RETRY_UA = 0x080,
+ ADA_FLAG_OPEN = 0x100,
+ ADA_FLAG_SCTX_INIT = 0x200
+} ada_flags;
+
+typedef enum {
+ ADA_Q_NONE = 0x00,
+ ADA_Q_NO_SYNC_CACHE = 0x01,
+ ADA_Q_NO_6_BYTE = 0x02,
+ ADA_Q_NO_PREVENT = 0x04
+} ada_quirks;
+
+typedef enum {
+ ADA_CCB_PROBE = 0x01,
+ ADA_CCB_PROBE2 = 0x02,
+ ADA_CCB_BUFFER_IO = 0x03,
+ ADA_CCB_WAITING = 0x04,
+ ADA_CCB_DUMP = 0x05,
+ ADA_CCB_TYPE_MASK = 0x0F,
+ ADA_CCB_RETRY_UA = 0x10
+} ada_ccb_state;
+
+/* Offsets into our private area for storing information */
+#define ccb_state ppriv_field0
+#define ccb_bp ppriv_ptr1
+
+struct disk_params {
+ u_int8_t heads;
+ u_int32_t cylinders;
+ u_int8_t secs_per_track;
+ u_int32_t secsize; /* Number of bytes/sector */
+ u_int64_t sectors; /* total number sectors */
+};
+
+struct ada_softc {
+ struct bio_queue_head bio_queue;
+ SLIST_ENTRY(ada_softc) links;
+ LIST_HEAD(, ccb_hdr) pending_ccbs;
+ ada_state state;
+ ada_flags flags;
+ ada_quirks quirks;
+ int ordered_tag_count;
+ int outstanding_cmds;
+ struct disk_params params;
+ struct disk *disk;
+ union ccb saved_ccb;
+ struct task sysctl_task;
+ struct sysctl_ctx_list sysctl_ctx;
+ struct sysctl_oid *sysctl_tree;
+ struct callout sendordered_c;
+};
+
+struct ada_quirk_entry {
+ struct scsi_inquiry_pattern inq_pat;
+ ada_quirks quirks;
+};
+
+//static struct ada_quirk_entry ada_quirk_table[] =
+//{
+//};
+
+static disk_strategy_t adastrategy;
+static dumper_t adadump;
+static periph_init_t adainit;
+static void adaasync(void *callback_arg, u_int32_t code,
+ struct cam_path *path, void *arg);
+static void adasysctlinit(void *context, int pending);
+static periph_ctor_t adaregister;
+static periph_dtor_t adacleanup;
+static periph_start_t adastart;
+static periph_oninv_t adaoninvalidate;
+static void adadone(struct cam_periph *periph,
+ union ccb *done_ccb);
+static int adaerror(union ccb *ccb, u_int32_t cam_flags,
+ u_int32_t sense_flags);
+static void adasetgeom(struct cam_periph *periph,
+ struct ccb_getdev *cgd);
+static timeout_t adasendorderedtag;
+static void adashutdown(void *arg, int howto);
+
+#ifndef ADA_DEFAULT_TIMEOUT
+#define ADA_DEFAULT_TIMEOUT 30 /* Timeout in seconds */
+#endif
+
+#ifndef ADA_DEFAULT_RETRY
+#define ADA_DEFAULT_RETRY 4
+#endif
+
+#ifndef ADA_DEFAULT_SEND_ORDERED
+#define ADA_DEFAULT_SEND_ORDERED 1
+#endif
+
+
+static int ada_retry_count = ADA_DEFAULT_RETRY;
+static int ada_default_timeout = ADA_DEFAULT_TIMEOUT;
+static int ada_send_ordered = ADA_DEFAULT_SEND_ORDERED;
+
+SYSCTL_NODE(_kern_cam, OID_AUTO, ada, CTLFLAG_RD, 0,
+ "CAM Direct Access Disk driver");
+SYSCTL_INT(_kern_cam_ada, OID_AUTO, retry_count, CTLFLAG_RW,
+ &ada_retry_count, 0, "Normal I/O retry count");
+TUNABLE_INT("kern.cam.ada.retry_count", &ada_retry_count);
+SYSCTL_INT(_kern_cam_ada, OID_AUTO, default_timeout, CTLFLAG_RW,
+ &ada_default_timeout, 0, "Normal I/O timeout (in seconds)");
+TUNABLE_INT("kern.cam.ada.default_timeout", &ada_default_timeout);
+SYSCTL_INT(_kern_cam_ada, OID_AUTO, ada_send_ordered, CTLFLAG_RW,
+ &ada_send_ordered, 0, "Send Ordered Tags");
+TUNABLE_INT("kern.cam.ada.ada_send_ordered", &ada_send_ordered);
+
+/*
+ * ADA_ORDEREDTAG_INTERVAL determines how often, relative
+ * to the default timeout, we check to see whether an ordered
+ * tagged transaction is appropriate to prevent simple tag
+ * starvation. Since we'd like to ensure that there is at least
+ * 1/2 of the timeout length left for a starved transaction to
+ * complete after we've sent an ordered tag, we must poll at least
+ * four times in every timeout period. This takes care of the worst
+ * case where a starved transaction starts during an interval that
+ * meets the requirement "don't send an ordered tag" test so it takes
+ * us two intervals to determine that a tag must be sent.
+ */
+#ifndef ADA_ORDEREDTAG_INTERVAL
+#define ADA_ORDEREDTAG_INTERVAL 4
+#endif
+
+static struct periph_driver adadriver =
+{
+ adainit, "ada",
+ TAILQ_HEAD_INITIALIZER(adadriver.units), /* generation */ 0
+};
+
+PERIPHDRIVER_DECLARE(ada, adadriver);
+
+MALLOC_DEFINE(M_ATADA, "ata_da", "ata_da buffers");
+
+static int
+adaopen(struct disk *dp)
+{
+ struct cam_periph *periph;
+ struct ada_softc *softc;
+ int unit;
+ int error;
+
+ periph = (struct cam_periph *)dp->d_drv1;
+ if (periph == NULL) {
+ return (ENXIO);
+ }
+
+ if (cam_periph_acquire(periph) != CAM_REQ_CMP) {
+ return(ENXIO);
+ }
+
+ cam_periph_lock(periph);
+ if ((error = cam_periph_hold(periph, PRIBIO|PCATCH)) != 0) {
+ cam_periph_unlock(periph);
+ cam_periph_release(periph);
+ return (error);
+ }
+
+ unit = periph->unit_number;
+ softc = (struct ada_softc *)periph->softc;
+ softc->flags |= ADA_FLAG_OPEN;
+
+ CAM_DEBUG(periph->path, CAM_DEBUG_TRACE,
+ ("adaopen: disk=%s%d (unit %d)\n", dp->d_name, dp->d_unit,
+ unit));
+
+ if ((softc->flags & ADA_FLAG_PACK_INVALID) != 0) {
+ /* Invalidate our pack information. */
+ softc->flags &= ~ADA_FLAG_PACK_INVALID;
+ }
+
+ cam_periph_unhold(periph);
+ cam_periph_unlock(periph);
+ return (0);
+}
+
+static int
+adaclose(struct disk *dp)
+{
+ struct cam_periph *periph;
+ struct ada_softc *softc;
+ union ccb *ccb;
+ int error;
+
+ periph = (struct cam_periph *)dp->d_drv1;
+ if (periph == NULL)
+ return (ENXIO);
+
+ cam_periph_lock(periph);
+ if ((error = cam_periph_hold(periph, PRIBIO)) != 0) {
+ cam_periph_unlock(periph);
+ cam_periph_release(periph);
+ return (error);
+ }
+
+ softc = (struct ada_softc *)periph->softc;
+ /* We only sync the cache if the drive is capable of it. */
+ if (softc->flags & ADA_FLAG_CAN_FLUSHCACHE) {
+
+ ccb = cam_periph_getccb(periph, /*priority*/1);
+ ccb->ccb_h.ccb_state = ADA_CCB_DUMP;
+ cam_fill_ataio(&ccb->ataio,
+ 1,
+ adadone,
+ CAM_DIR_NONE,
+ 0,
+ NULL,
+ 0,
+ ada_default_timeout*1000);
+
+ if (softc->flags & ADA_FLAG_CAN_48BIT)
+ ata_48bit_cmd(&ccb->ataio, ATA_FLUSHCACHE48, 0, 0, 0);
+ else
+ ata_48bit_cmd(&ccb->ataio, ATA_FLUSHCACHE, 0, 0, 0);
+ xpt_polled_action(ccb);
+
+ if ((ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP)
+ xpt_print(periph->path, "Synchronize cache failed\n");
+
+ if ((ccb->ccb_h.status & CAM_DEV_QFRZN) != 0)
+ cam_release_devq(ccb->ccb_h.path,
+ /*relsim_flags*/0,
+ /*reduction*/0,
+ /*timeout*/0,
+ /*getcount_only*/0);
+ xpt_release_ccb(ccb);
+ }
+
+ softc->flags &= ~ADA_FLAG_OPEN;
+ cam_periph_unhold(periph);
+ cam_periph_unlock(periph);
+ cam_periph_release(periph);
+ return (0);
+}
+
+/*
+ * Actually translate the requested transfer into one the physical driver
+ * can understand. The transfer is described by a buf and will include
+ * only one physical transfer.
+ */
+static void
+adastrategy(struct bio *bp)
+{
+ struct cam_periph *periph;
+ struct ada_softc *softc;
+
+ periph = (struct cam_periph *)bp->bio_disk->d_drv1;
+ if (periph == NULL) {
+ biofinish(bp, NULL, ENXIO);
+ return;
+ }
+ softc = (struct ada_softc *)periph->softc;
+
+ cam_periph_lock(periph);
+
+#if 0
+ /*
+ * check it's not too big a transfer for our adapter
+ */
+ scsi_minphys(bp,&sd_switch);
+#endif
+
+ /*
+ * Mask interrupts so that the pack cannot be invalidated until
+ * after we are in the queue. Otherwise, we might not properly
+ * clean up one of the buffers.
+ */
+
+ /*
+ * If the device has been made invalid, error out
+ */
+ if ((softc->flags & ADA_FLAG_PACK_INVALID)) {
+ cam_periph_unlock(periph);
+ biofinish(bp, NULL, ENXIO);
+ return;
+ }
+
+ /*
+ * Place it in the queue of disk activities for this disk
+ */
+ bioq_disksort(&softc->bio_queue, bp);
+
+ /*
+ * Schedule ourselves for performing the work.
+ */
+ xpt_schedule(periph, /* XXX priority */1);
+ cam_periph_unlock(periph);
+
+ return;
+}
+
+static int
+adadump(void *arg, void *virtual, vm_offset_t physical, off_t offset, size_t length)
+{
+ struct cam_periph *periph;
+ struct ada_softc *softc;
+ u_int secsize;
+ union ccb ccb;
+ struct disk *dp;
+ uint64_t lba;
+ uint16_t count;
+
+ dp = arg;
+ periph = dp->d_drv1;
+ if (periph == NULL)
+ return (ENXIO);
+ softc = (struct ada_softc *)periph->softc;
+ cam_periph_lock(periph);
+ secsize = softc->params.secsize;
+ lba = offset / secsize;
+ count = length / secsize;
+
+ if ((softc->flags & ADA_FLAG_PACK_INVALID) != 0) {
+ cam_periph_unlock(periph);
+ return (ENXIO);
+ }
+
+ if (length > 0) {
+ periph->flags |= CAM_PERIPH_POLLED;
+ xpt_setup_ccb(&ccb.ccb_h, periph->path, /*priority*/1);
+ ccb.ccb_h.ccb_state = ADA_CCB_DUMP;
+ cam_fill_ataio(&ccb.ataio,
+ 0,
+ adadone,
+ CAM_DIR_OUT,
+ 0,
+ (u_int8_t *) virtual,
+ length,
+ ada_default_timeout*1000);
+ if ((softc->flags & ADA_FLAG_CAN_48BIT) &&
+ (lba + count >= ATA_MAX_28BIT_LBA ||
+ count >= 256)) {
+ ata_48bit_cmd(&ccb.ataio, ATA_WRITE_DMA48,
+ 0, lba, count);
+ } else {
+ ata_36bit_cmd(&ccb.ataio, ATA_WRITE_DMA,
+ 0, lba, count);
+ }
+ xpt_polled_action(&ccb);
+
+ if ((ccb.ataio.ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
+ printf("Aborting dump due to I/O error.\n");
+ cam_periph_unlock(periph);
+ return(EIO);
+ }
+ cam_periph_unlock(periph);
+ return(0);
+ }
+
+ if (softc->flags & ADA_FLAG_CAN_FLUSHCACHE) {
+ xpt_setup_ccb(&ccb.ccb_h, periph->path, /*priority*/1);
+
+ ccb.ccb_h.ccb_state = ADA_CCB_DUMP;
+ cam_fill_ataio(&ccb.ataio,
+ 1,
+ adadone,
+ CAM_DIR_NONE,
+ 0,
+ NULL,
+ 0,
+ ada_default_timeout*1000);
+
+ if (softc->flags & ADA_FLAG_CAN_48BIT)
+ ata_48bit_cmd(&ccb.ataio, ATA_FLUSHCACHE48, 0, 0, 0);
+ else
+ ata_48bit_cmd(&ccb.ataio, ATA_FLUSHCACHE, 0, 0, 0);
+ xpt_polled_action(&ccb);
+
+ if ((ccb.ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP)
+ xpt_print(periph->path, "Synchronize cache failed\n");
+
+ if ((ccb.ccb_h.status & CAM_DEV_QFRZN) != 0)
+ cam_release_devq(ccb.ccb_h.path,
+ /*relsim_flags*/0,
+ /*reduction*/0,
+ /*timeout*/0,
+ /*getcount_only*/0);
+ }
+ periph->flags &= ~CAM_PERIPH_POLLED;
+ cam_periph_unlock(periph);
+ return (0);
+}
+
+static void
+adainit(void)
+{
+ cam_status status;
+
+ /*
+ * Install a global async callback. This callback will
+ * receive async callbacks like "new device found".
+ */
+ status = xpt_register_async(AC_FOUND_DEVICE, adaasync, NULL, NULL);
+
+ if (status != CAM_REQ_CMP) {
+ printf("ada: Failed to attach master async callback "
+ "due to status 0x%x!\n", status);
+ } else if (ada_send_ordered) {
+
+ /* Register our shutdown event handler */
+ if ((EVENTHANDLER_REGISTER(shutdown_post_sync, adashutdown,
+ NULL, SHUTDOWN_PRI_DEFAULT)) == NULL)
+ printf("adainit: shutdown event registration failed!\n");
+ }
+}
+
+static void
+adaoninvalidate(struct cam_periph *periph)
+{
+ struct ada_softc *softc;
+
+ softc = (struct ada_softc *)periph->softc;
+
+ /*
+ * De-register any async callbacks.
+ */
+ xpt_register_async(0, adaasync, periph, periph->path);
+
+ softc->flags |= ADA_FLAG_PACK_INVALID;
+
+ /*
+ * Return all queued I/O with ENXIO.
+ * XXX Handle any transactions queued to the card
+ * with XPT_ABORT_CCB.
+ */
+ bioq_flush(&softc->bio_queue, NULL, ENXIO);
+
+ disk_gone(softc->disk);
+ xpt_print(periph->path, "lost device\n");
+}
+
+static void
+adacleanup(struct cam_periph *periph)
+{
+ struct ada_softc *softc;
+
+ softc = (struct ada_softc *)periph->softc;
+
+ xpt_print(periph->path, "removing device entry\n");
+ cam_periph_unlock(periph);
+
+ /*
+ * If we can't free the sysctl tree, oh well...
+ */
+ if ((softc->flags & ADA_FLAG_SCTX_INIT) != 0
+ && sysctl_ctx_free(&softc->sysctl_ctx) != 0) {
+ xpt_print(periph->path, "can't remove sysctl context\n");
+ }
+
+ disk_destroy(softc->disk);
+ callout_drain(&softc->sendordered_c);
+ free(softc, M_DEVBUF);
+ cam_periph_lock(periph);
+}
+
+static void
+adaasync(void *callback_arg, u_int32_t code,
+ struct cam_path *path, void *arg)
+{
+ struct cam_periph *periph;
+
+ periph = (struct cam_periph *)callback_arg;
+ switch (code) {
+ case AC_FOUND_DEVICE:
+ {
+ struct ccb_getdev *cgd;
+ cam_status status;
+
+ cgd = (struct ccb_getdev *)arg;
+ if (cgd == NULL)
+ break;
+
+ if (cgd->protocol != PROTO_ATA)
+ break;
+
+// if (SID_TYPE(&cgd->inq_data) != T_DIRECT
+// && SID_TYPE(&cgd->inq_data) != T_RBC
+// && SID_TYPE(&cgd->inq_data) != T_OPTICAL)
+// break;
+
+ /*
+ * Allocate a peripheral instance for
+ * this device and start the probe
+ * process.
+ */
+ status = cam_periph_alloc(adaregister, adaoninvalidate,
+ adacleanup, adastart,
+ "ada", CAM_PERIPH_BIO,
+ cgd->ccb_h.path, adaasync,
+ AC_FOUND_DEVICE, cgd);
+
+ if (status != CAM_REQ_CMP
+ && status != CAM_REQ_INPROG)
+ printf("adaasync: Unable to attach to new device "
+ "due to status 0x%x\n", status);
+ break;
+ }
+ case AC_SENT_BDR:
+ case AC_BUS_RESET:
+ {
+ struct ada_softc *softc;
+ struct ccb_hdr *ccbh;
+
+ softc = (struct ada_softc *)periph->softc;
+ /*
+ * Don't fail on the expected unit attention
+ * that will occur.
+ */
+ softc->flags |= ADA_FLAG_RETRY_UA;
+ LIST_FOREACH(ccbh, &softc->pending_ccbs, periph_links.le)
+ ccbh->ccb_state |= ADA_CCB_RETRY_UA;
+ /* FALLTHROUGH*/
+ }
+ default:
+ cam_periph_async(periph, code, path, arg);
+ break;
+ }
+}
+
+static void
+adasysctlinit(void *context, int pending)
+{
+ struct cam_periph *periph;
+ struct ada_softc *softc;
+ char tmpstr[80], tmpstr2[80];
+
+ periph = (struct cam_periph *)context;
+ if (cam_periph_acquire(periph) != CAM_REQ_CMP)
+ return;
+
+ softc = (struct ada_softc *)periph->softc;
+ snprintf(tmpstr, sizeof(tmpstr), "CAM ADA unit %d", periph->unit_number);
+ snprintf(tmpstr2, sizeof(tmpstr2), "%d", periph->unit_number);
+
+ sysctl_ctx_init(&softc->sysctl_ctx);
+ softc->flags |= ADA_FLAG_SCTX_INIT;
+ softc->sysctl_tree = SYSCTL_ADD_NODE(&softc->sysctl_ctx,
+ SYSCTL_STATIC_CHILDREN(_kern_cam_ada), OID_AUTO, tmpstr2,
+ CTLFLAG_RD, 0, tmpstr);
+ if (softc->sysctl_tree == NULL) {
+ printf("adasysctlinit: unable to allocate sysctl tree\n");
+ cam_periph_release(periph);
+ return;
+ }
+
+ cam_periph_release(periph);
+}
+
+static cam_status
+adaregister(struct cam_periph *periph, void *arg)
+{
+ struct ada_softc *softc;
+ struct ccb_pathinq cpi;
+ struct ccb_getdev *cgd;
+ char announce_buf[80];
+ struct disk_params *dp;
+ caddr_t match;
+ u_int maxio;
+
+ cgd = (struct ccb_getdev *)arg;
+ if (periph == NULL) {
+ printf("adaregister: periph was NULL!!\n");
+ return(CAM_REQ_CMP_ERR);
+ }
+
+ if (cgd == NULL) {
+ printf("adaregister: no getdev CCB, can't register device\n");
+ return(CAM_REQ_CMP_ERR);
+ }
+
+ softc = (struct ada_softc *)malloc(sizeof(*softc), M_DEVBUF,
+ M_NOWAIT|M_ZERO);
+
+ if (softc == NULL) {
+ printf("adaregister: Unable to probe new device. "
+ "Unable to allocate softc\n");
+ return(CAM_REQ_CMP_ERR);
+ }
+
+ LIST_INIT(&softc->pending_ccbs);
+ softc->state = ADA_STATE_NORMAL;
+ bioq_init(&softc->bio_queue);
+
+ if (cgd->ident_data.support.command2 & ATA_SUPPORT_ADDRESS48)
+ softc->flags |= ADA_FLAG_CAN_48BIT;
+ if (cgd->ident_data.support.command2 & ATA_SUPPORT_FLUSHCACHE)
+ softc->flags |= ADA_FLAG_CAN_FLUSHCACHE;
+ if (cgd->ident_data.satacapabilities & ATA_SUPPORT_NCQ &&
+ cgd->ident_data.queue >= 31)
+ softc->flags |= ADA_FLAG_CAN_NCQ;
+// if ((cgd->inq_data.flags & SID_CmdQue) != 0)
+// softc->flags |= ADA_FLAG_TAGGED_QUEUING;
+
+ periph->softc = softc;
+
+ /*
+ * See if this device has any quirks.
+ */
+// match = cam_quirkmatch((caddr_t)&cgd->inq_data,
+// (caddr_t)ada_quirk_table,
+// sizeof(ada_quirk_table)/sizeof(*ada_quirk_table),
+// sizeof(*ada_quirk_table), scsi_inquiry_match);
+ match = NULL;
+
+ if (match != NULL)
+ softc->quirks = ((struct ada_quirk_entry *)match)->quirks;
+ else
+ softc->quirks = ADA_Q_NONE;
+
+ /* Check if the SIM does not want queued commands */
+ bzero(&cpi, sizeof(cpi));
+ xpt_setup_ccb(&cpi.ccb_h, periph->path, /*priority*/1);
+ cpi.ccb_h.func_code = XPT_PATH_INQ;
+ xpt_action((union ccb *)&cpi);
+ if (cpi.ccb_h.status != CAM_REQ_CMP ||
+ (cpi.hba_inquiry & PI_TAG_ABLE) == 0)
+ softc->flags &= ~ADA_FLAG_CAN_NCQ;
+
+ TASK_INIT(&softc->sysctl_task, 0, adasysctlinit, periph);
+
+ /*
+ * Register this media as a disk
+ */
+ mtx_unlock(periph->sim->mtx);
+ softc->disk = disk_alloc();
+ softc->disk->d_open = adaopen;
+ softc->disk->d_close = adaclose;
+ softc->disk->d_strategy = adastrategy;
+ softc->disk->d_dump = adadump;
+ softc->disk->d_name = "ada";
+ softc->disk->d_drv1 = periph;
+ maxio = cpi.maxio; /* Honor max I/O size of SIM */
+ if (maxio == 0)
+ maxio = DFLTPHYS; /* traditional default */
+ else if (maxio > MAXPHYS)
+ maxio = MAXPHYS; /* for safety */
+ if (cgd->ident_data.support.command2 & ATA_SUPPORT_ADDRESS48)
+ maxio = min(maxio, 65535 * 512);
+ else /* 28bit ATA command limit */
+ maxio = min(maxio, 255 * 512);
+ softc->disk->d_maxsize = maxio;
+ softc->disk->d_unit = periph->unit_number;
+ softc->disk->d_flags = 0;
+ if (softc->flags & ADA_FLAG_CAN_FLUSHCACHE)
+ softc->disk->d_flags |= DISKFLAG_CANFLUSHCACHE;
+
+ adasetgeom(periph, cgd);
+ softc->disk->d_sectorsize = softc->params.secsize;
+ softc->disk->d_mediasize = softc->params.secsize * (off_t)softc->params.sectors;
+ /* XXX: these are not actually "firmware" values, so they may be wrong */
+ softc->disk->d_fwsectors = softc->params.secs_per_track;
+ softc->disk->d_fwheads = softc->params.heads;
+// softc->disk->d_devstat->block_size = softc->params.secsize;
+// softc->disk->d_devstat->flags &= ~DEVSTAT_BS_UNAVAILABLE;
+
+ disk_create(softc->disk, DISK_VERSION);
+ mtx_lock(periph->sim->mtx);
+
+ dp = &softc->params;
+ snprintf(announce_buf, sizeof(announce_buf),
+ "%juMB (%ju %u byte sectors: %dH %dS/T %dC)",
+ (uintmax_t)(((uintmax_t)dp->secsize *
+ dp->sectors) / (1024*1024)),
+ (uintmax_t)dp->sectors,
+ dp->secsize, dp->heads,
+ dp->secs_per_track, dp->cylinders);
+ xpt_announce_periph(periph, announce_buf);
+ if (softc->flags & ADA_FLAG_CAN_NCQ) {
+ printf("%s%d: Native Command Queueing enabled\n",
+ periph->periph_name, periph->unit_number);
+ }
+
+ /*
+ * Add async callbacks for bus reset and
+ * bus device reset calls. I don't bother
+ * checking if this fails as, in most cases,
+ * the system will function just fine without
+ * them and the only alternative would be to
+ * not attach the device on failure.
+ */
+ xpt_register_async(AC_SENT_BDR | AC_BUS_RESET | AC_LOST_DEVICE,
+ adaasync, periph, periph->path);
+
+ /*
+ * Take an exclusive refcount on the periph while adastart is called
+ * to finish the probe. The reference will be dropped in adadone at
+ * the end of probe.
+ */
+// (void)cam_periph_hold(periph, PRIBIO);
+// xpt_schedule(periph, /*priority*/5);
+
+ /*
+ * Schedule a periodic event to occasionally send an
+ * ordered tag to a device.
+ */
+ callout_init_mtx(&softc->sendordered_c, periph->sim->mtx, 0);
+ callout_reset(&softc->sendordered_c,
+ (ADA_DEFAULT_TIMEOUT * hz) / ADA_ORDEREDTAG_INTERVAL,
+ adasendorderedtag, softc);
+
+ return(CAM_REQ_CMP);
+}
+
+static void
+adastart(struct cam_periph *periph, union ccb *start_ccb)
+{
+ struct ada_softc *softc;
+
+ softc = (struct ada_softc *)periph->softc;
+
+ switch (softc->state) {
+ case ADA_STATE_NORMAL:
+ {
+ /* Pull a buffer from the queue and get going on it */
+ struct bio *bp;
+
+ /*
+ * See if there is a buf with work for us to do..
+ */
+ bp = bioq_first(&softc->bio_queue);
+ if (periph->immediate_priority <= periph->pinfo.priority) {
+ CAM_DEBUG_PRINT(CAM_DEBUG_SUBTRACE,
+ ("queuing for immediate ccb\n"));
+ start_ccb->ccb_h.ccb_state = ADA_CCB_WAITING;
+ SLIST_INSERT_HEAD(&periph->ccb_list, &start_ccb->ccb_h,
+ periph_links.sle);
+ periph->immediate_priority = CAM_PRIORITY_NONE;
+ wakeup(&periph->ccb_list);
+ } else if (bp == NULL) {
+ xpt_release_ccb(start_ccb);
+ } else {
+ struct ccb_ataio *ataio = &start_ccb->ataio;
+ u_int8_t tag_code;
+
+ bioq_remove(&softc->bio_queue, bp);
+
+ if ((softc->flags & ADA_FLAG_NEED_OTAG) != 0) {
+ softc->flags &= ~ADA_FLAG_NEED_OTAG;
+ softc->ordered_tag_count++;
+ tag_code = 0;//MSG_ORDERED_Q_TAG;
+ } else {
+ tag_code = 0;//MSG_SIMPLE_Q_TAG;
+ }
+ switch (bp->bio_cmd) {
+ case BIO_READ:
+ case BIO_WRITE:
+ {
+ uint64_t lba = bp->bio_pblkno;
+ uint16_t count = bp->bio_bcount / softc->params.secsize;
+
+ cam_fill_ataio(ataio,
+ ada_retry_count,
+ adadone,
+ bp->bio_cmd == BIO_READ ?
+ CAM_DIR_IN : CAM_DIR_OUT,
+ tag_code,
+ bp->bio_data,
+ bp->bio_bcount,
+ ada_default_timeout*1000);
+
+ if (softc->flags & ADA_FLAG_CAN_NCQ) {
+ if (bp->bio_cmd == BIO_READ) {
+ ata_ncq_cmd(ataio, ATA_READ_FPDMA_QUEUED,
+ lba, count);
+ } else {
+ ata_ncq_cmd(ataio, ATA_WRITE_FPDMA_QUEUED,
+ lba, count);
+ }
+ } else if ((softc->flags & ADA_FLAG_CAN_48BIT) &&
+ (lba + count >= ATA_MAX_28BIT_LBA ||
+ count >= 256)) {
+ if (bp->bio_cmd == BIO_READ) {
+ ata_48bit_cmd(ataio, ATA_READ_DMA48,
+ 0, lba, count);
+ } else {
+ ata_48bit_cmd(ataio, ATA_WRITE_DMA48,
+ 0, lba, count);
+ }
+ } else {
+ if (bp->bio_cmd == BIO_READ) {
+ ata_36bit_cmd(ataio, ATA_READ_DMA,
+ 0, lba, count);
+ } else {
+ ata_36bit_cmd(ataio, ATA_WRITE_DMA,
+ 0, lba, count);
+ }
+ }
+ }
+ break;
+ case BIO_FLUSH:
+ cam_fill_ataio(ataio,
+ 1,
+ adadone,
+ CAM_DIR_NONE,
+ tag_code,
+ NULL,
+ 0,
+ ada_default_timeout*1000);
+
+ if (softc->flags & ADA_FLAG_CAN_48BIT)
+ ata_48bit_cmd(ataio, ATA_FLUSHCACHE48, 0, 0, 0);
+ else
+ ata_48bit_cmd(ataio, ATA_FLUSHCACHE, 0, 0, 0);
+ break;
+ }
+ start_ccb->ccb_h.ccb_state = ADA_CCB_BUFFER_IO;
+
+ /*
+ * Block out any asynchronous callbacks
+ * while we touch the pending ccb list.
+ */
+ LIST_INSERT_HEAD(&softc->pending_ccbs,
+ &start_ccb->ccb_h, periph_links.le);
+ softc->outstanding_cmds++;
+
+ /* We expect a unit attention from this device */
+ if ((softc->flags & ADA_FLAG_RETRY_UA) != 0) {
+ start_ccb->ccb_h.ccb_state |= ADA_CCB_RETRY_UA;
+ softc->flags &= ~ADA_FLAG_RETRY_UA;
+ }
+
+ start_ccb->ccb_h.ccb_bp = bp;
+ bp = bioq_first(&softc->bio_queue);
+
+ xpt_action(start_ccb);
+ }
+
+ if (bp != NULL) {
+ /* Have more work to do, so ensure we stay scheduled */
+ xpt_schedule(periph, /* XXX priority */1);
+ }
+ break;
+ }
+ }
+}
+
+static void
+adadone(struct cam_periph *periph, union ccb *done_ccb)
+{
+ struct ada_softc *softc;
+ struct ccb_ataio *ataio;
+
+ softc = (struct ada_softc *)periph->softc;
+ ataio = &done_ccb->ataio;
+ switch (ataio->ccb_h.ccb_state & ADA_CCB_TYPE_MASK) {
+ case ADA_CCB_BUFFER_IO:
+ {
+ struct bio *bp;
+
+ bp = (struct bio *)done_ccb->ccb_h.ccb_bp;
+ if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
+ int error;
+
+ error = adaerror(done_ccb, CAM_RETRY_SELTO, 0);
+ if (error == ERESTART) {
+ /*
+ * A retry was scheduled, so
+ * just return.
+ */
+ return;
+ }
+ if (error != 0) {
+
+ if (error == ENXIO) {
+ /*
+ * Catastrophic error. Mark our pack as
+ * invalid.
+ */
+ /*
+ * XXX See if this is really a media
+ * XXX change first?
+ */
+ xpt_print(periph->path,
+ "Invalidating pack\n");
+ softc->flags |= ADA_FLAG_PACK_INVALID;
+ }
+
+ /*
+ * return all queued I/O with EIO, so that
+ * the client can retry these I/Os in the
+ * proper order should it attempt to recover.
+ */
+ bioq_flush(&softc->bio_queue, NULL, EIO);
+ bp->bio_error = error;
+ bp->bio_resid = bp->bio_bcount;
+ bp->bio_flags |= BIO_ERROR;
+ } else {
+ bp->bio_resid = ataio->resid;
+ bp->bio_error = 0;
+ if (bp->bio_resid != 0)
+ bp->bio_flags |= BIO_ERROR;
+ }
+ if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0)
+ cam_release_devq(done_ccb->ccb_h.path,
+ /*relsim_flags*/0,
+ /*reduction*/0,
+ /*timeout*/0,
+ /*getcount_only*/0);
+ } else {
+ if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0)
+ panic("REQ_CMP with QFRZN");
+ bp->bio_resid = ataio->resid;
+ if (ataio->resid > 0)
+ bp->bio_flags |= BIO_ERROR;
+ }
+
+ /*
+ * Block out any asynchronous callbacks
+ * while we touch the pending ccb list.
+ */
+ LIST_REMOVE(&done_ccb->ccb_h, periph_links.le);
+ softc->outstanding_cmds--;
+ if (softc->outstanding_cmds == 0)
+ softc->flags |= ADA_FLAG_WENT_IDLE;
+
+ biodone(bp);
+ break;
+ }
+ case ADA_CCB_WAITING:
+ {
+ /* Caller will release the CCB */
+ wakeup(&done_ccb->ccb_h.cbfcnp);
+ return;
+ }
+ case ADA_CCB_DUMP:
+ /* No-op. We're polling */
+ return;
+ default:
+ break;
+ }
+ xpt_release_ccb(done_ccb);
+}
+
+static int
+adaerror(union ccb *ccb, u_int32_t cam_flags, u_int32_t sense_flags)
+{
+ struct ada_softc *softc;
+ struct cam_periph *periph;
+
+ periph = xpt_path_periph(ccb->ccb_h.path);
+ softc = (struct ada_softc *)periph->softc;
+
+ return(cam_periph_error(ccb, cam_flags, sense_flags,
+ &softc->saved_ccb));
+}
+
+static void
+adasetgeom(struct cam_periph *periph, struct ccb_getdev *cgd)
+{
+ struct ada_softc *softc = (struct ada_softc *)periph->softc;
+ struct disk_params *dp = &softc->params;
+ u_int64_t lbasize48;
+ u_int32_t lbasize;
+
+ dp->secsize = 512;
+ if ((cgd->ident_data.atavalid & ATA_FLAG_54_58) &&
+ cgd->ident_data.current_heads && cgd->ident_data.current_sectors) {
+ dp->heads = cgd->ident_data.current_heads;
+ dp->secs_per_track = cgd->ident_data.current_sectors;
+ dp->cylinders = cgd->ident_data.cylinders;
+ dp->sectors = (u_int32_t)cgd->ident_data.current_size_1 |
+ ((u_int32_t)cgd->ident_data.current_size_2 << 16);
+ } else {
+ dp->heads = cgd->ident_data.heads;
+ dp->secs_per_track = cgd->ident_data.sectors;
+ dp->cylinders = cgd->ident_data.cylinders;
+ dp->sectors = cgd->ident_data.cylinders * dp->heads * dp->secs_per_track;
+ }
+ lbasize = (u_int32_t)cgd->ident_data.lba_size_1 |
+ ((u_int32_t)cgd->ident_data.lba_size_2 << 16);
+
+ /* does this device need old-style CHS addressing? */
+// if (!ad_version(cgd->ident_data.version_major) || !lbasize)
+// atadev->flags |= ATA_D_USE_CHS;
+
+ /* use the 28bit LBA size if valid or bigger than the CHS mapping */
+ if (cgd->ident_data.cylinders == 16383 || dp->sectors < lbasize)
+ dp->sectors = lbasize;
+
+ /* use the 48bit LBA size if valid */
+ lbasize48 = ((u_int64_t)cgd->ident_data.lba_size48_1) |
+ ((u_int64_t)cgd->ident_data.lba_size48_2 << 16) |
+ ((u_int64_t)cgd->ident_data.lba_size48_3 << 32) |
+ ((u_int64_t)cgd->ident_data.lba_size48_4 << 48);
+ if ((cgd->ident_data.support.command2 & ATA_SUPPORT_ADDRESS48) &&
+ lbasize48 > ATA_MAX_28BIT_LBA)
+ dp->sectors = lbasize48;
+}
+
+static void
+adasendorderedtag(void *arg)
+{
+ struct ada_softc *softc = arg;
+
+ if (ada_send_ordered) {
+ if ((softc->ordered_tag_count == 0)
+ && ((softc->flags & ADA_FLAG_WENT_IDLE) == 0)) {
+ softc->flags |= ADA_FLAG_NEED_OTAG;
+ }
+ if (softc->outstanding_cmds > 0)
+ softc->flags &= ~ADA_FLAG_WENT_IDLE;
+
+ softc->ordered_tag_count = 0;
+ }
+ /* Queue us up again */
+ callout_reset(&softc->sendordered_c,
+ (ADA_DEFAULT_TIMEOUT * hz) / ADA_ORDEREDTAG_INTERVAL,
+ adasendorderedtag, softc);
+}
+
+/*
+ * Step through all ADA peripheral drivers, and if the device is still open,
+ * sync the disk cache to physical media.
+ */
+static void
+adashutdown(void * arg, int howto)
+{
+ struct cam_periph *periph;
+ struct ada_softc *softc;
+
+ TAILQ_FOREACH(periph, &adadriver.units, unit_links) {
+ union ccb ccb;
+
+ cam_periph_lock(periph);
+ softc = (struct ada_softc *)periph->softc;
+ /*
+ * We only sync the cache if the drive is still open, and
+ * if the drive is capable of it.
+ */
+ if (((softc->flags & ADA_FLAG_OPEN) == 0) ||
+ (softc->flags & ADA_FLAG_CAN_FLUSHCACHE) == 0) {
+ cam_periph_unlock(periph);
+ continue;
+ }
+
+ xpt_setup_ccb(&ccb.ccb_h, periph->path, /*priority*/1);
+
+ ccb.ccb_h.ccb_state = ADA_CCB_DUMP;
+ cam_fill_ataio(&ccb.ataio,
+ 1,
+ adadone,
+ CAM_DIR_NONE,
+ 0,
+ NULL,
+ 0,
+ ada_default_timeout*1000);
+
+ if (softc->flags & ADA_FLAG_CAN_48BIT)
+ ata_48bit_cmd(&ccb.ataio, ATA_FLUSHCACHE48, 0, 0, 0);
+ else
+ ata_48bit_cmd(&ccb.ataio, ATA_FLUSHCACHE, 0, 0, 0);
+ xpt_polled_action(&ccb);
+
+ if ((ccb.ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP)
+ xpt_print(periph->path, "Synchronize cache failed\n");
+
+ if ((ccb.ccb_h.status & CAM_DEV_QFRZN) != 0)
+ cam_release_devq(ccb.ccb_h.path,
+ /*relsim_flags*/0,
+ /*reduction*/0,
+ /*timeout*/0,
+ /*getcount_only*/0);
+ cam_periph_unlock(periph);
+ }
+}
+
+#endif /* _KERNEL */
diff --git a/sys/cam/ata/ata_xpt.c b/sys/cam/ata/ata_xpt.c
new file mode 100644
index 000000000000..7f8daa2c9412
--- /dev/null
+++ b/sys/cam/ata/ata_xpt.c
@@ -0,0 +1,1895 @@
+/*-
+ * Copyright (c) 2009 Alexander Motin <mav@FreeBSD.org>
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer,
+ * without modification, immediately at the beginning of the file.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+#include <sys/param.h>
+#include <sys/bus.h>
+#include <sys/endian.h>
+#include <sys/systm.h>
+#include <sys/types.h>
+#include <sys/malloc.h>
+#include <sys/kernel.h>
+#include <sys/time.h>
+#include <sys/conf.h>
+#include <sys/fcntl.h>
+#include <sys/md5.h>
+#include <sys/interrupt.h>
+#include <sys/sbuf.h>
+
+#include <sys/lock.h>
+#include <sys/mutex.h>
+#include <sys/sysctl.h>
+
+#ifdef PC98
+#include <pc98/pc98/pc98_machdep.h> /* geometry translation */
+#endif
+
+#include <cam/cam.h>
+#include <cam/cam_ccb.h>
+#include <cam/cam_queue.h>
+#include <cam/cam_periph.h>
+#include <cam/cam_sim.h>
+#include <cam/cam_xpt.h>
+#include <cam/cam_xpt_sim.h>
+#include <cam/cam_xpt_periph.h>
+#include <cam/cam_xpt_internal.h>
+#include <cam/cam_debug.h>
+
+#include <cam/scsi/scsi_all.h>
+#include <cam/scsi/scsi_message.h>
+#include <cam/scsi/scsi_pass.h>
+#include <cam/ata/ata_all.h>
+#include <machine/stdarg.h> /* for xpt_print below */
+#include "opt_cam.h"
+
+struct scsi_quirk_entry {
+ struct scsi_inquiry_pattern inq_pat;
+ u_int8_t quirks;
+#define CAM_QUIRK_NOLUNS 0x01
+#define CAM_QUIRK_NOSERIAL 0x02
+#define CAM_QUIRK_HILUNS 0x04
+#define CAM_QUIRK_NOHILUNS 0x08
+ u_int mintags;
+ u_int maxtags;
+};
+#define SCSI_QUIRK(dev) ((struct scsi_quirk_entry *)((dev)->quirk))
+
+static periph_init_t probe_periph_init;
+
+static struct periph_driver probe_driver =
+{
+ probe_periph_init, "probe",
+ TAILQ_HEAD_INITIALIZER(probe_driver.units)
+};
+
+PERIPHDRIVER_DECLARE(probe, probe_driver);
+
+typedef enum {
+ PROBE_RESET,
+ PROBE_IDENTIFY,
+ PROBE_SETMODE,
+ PROBE_INQUIRY,
+ PROBE_FULL_INQUIRY,
+ PROBE_PM_PID,
+ PROBE_PM_PRV,
+ PROBE_PM_PORTS,
+ PROBE_PM_RESET,
+ PROBE_PM_CONNECT,
+ PROBE_PM_CHECK,
+ PROBE_PM_CLEAR,
+ PROBE_INVALID
+} probe_action;
+
+static char *probe_action_text[] = {
+ "PROBE_RESET",
+ "PROBE_IDENTIFY",
+ "PROBE_SETMODE",
+ "PROBE_INQUIRY",
+ "PROBE_FULL_INQUIRY",
+ "PROBE_PM_PID",
+ "PROBE_PM_PRV",
+ "PROBE_PM_PORTS",
+ "PROBE_PM_RESET",
+ "PROBE_PM_CONNECT",
+ "PROBE_PM_CHECK",
+ "PROBE_PM_CLEAR",
+ "PROBE_INVALID"
+};
+
+#define PROBE_SET_ACTION(softc, newaction) \
+do { \
+ char **text; \
+ text = probe_action_text; \
+ CAM_DEBUG((softc)->periph->path, CAM_DEBUG_INFO, \
+ ("Probe %s to %s\n", text[(softc)->action], \
+ text[(newaction)])); \
+ (softc)->action = (newaction); \
+} while(0)
+
+typedef enum {
+ PROBE_NO_ANNOUNCE = 0x04
+} probe_flags;
+
+typedef struct {
+ TAILQ_HEAD(, ccb_hdr) request_ccbs;
+ probe_action action;
+ union ccb saved_ccb;
+ probe_flags flags;
+ u_int8_t digest[16];
+ uint32_t pm_pid;
+ uint32_t pm_prv;
+ int pm_ports;
+ int pm_step;
+ int pm_try;
+ struct cam_periph *periph;
+} probe_softc;
+
+static struct scsi_quirk_entry scsi_quirk_table[] =
+{
+ {
+ /* Default tagged queuing parameters for all devices */
+ {
+ T_ANY, SIP_MEDIA_REMOVABLE|SIP_MEDIA_FIXED,
+ /*vendor*/"*", /*product*/"*", /*revision*/"*"
+ },
+ /*quirks*/0, /*mintags*/2, /*maxtags*/32
+ },
+};
+
+static const int scsi_quirk_table_size =
+ sizeof(scsi_quirk_table) / sizeof(*scsi_quirk_table);
+
+static cam_status proberegister(struct cam_periph *periph,
+ void *arg);
+static void probeschedule(struct cam_periph *probe_periph);
+static void probestart(struct cam_periph *periph, union ccb *start_ccb);
+//static void proberequestdefaultnegotiation(struct cam_periph *periph);
+//static int proberequestbackoff(struct cam_periph *periph,
+// struct cam_ed *device);
+static void probedone(struct cam_periph *periph, union ccb *done_ccb);
+static void probecleanup(struct cam_periph *periph);
+static void scsi_find_quirk(struct cam_ed *device);
+static void ata_scan_bus(struct cam_periph *periph, union ccb *ccb);
+static void ata_scan_lun(struct cam_periph *periph,
+ struct cam_path *path, cam_flags flags,
+ union ccb *ccb);
+static void xptscandone(struct cam_periph *periph, union ccb *done_ccb);
+static struct cam_ed *
+ ata_alloc_device(struct cam_eb *bus, struct cam_et *target,
+ lun_id_t lun_id);
+static void ata_device_transport(struct cam_path *path);
+static void scsi_set_transfer_settings(struct ccb_trans_settings *cts,
+ struct cam_ed *device,
+ int async_update);
+static void scsi_toggle_tags(struct cam_path *path);
+static void ata_dev_async(u_int32_t async_code,
+ struct cam_eb *bus,
+ struct cam_et *target,
+ struct cam_ed *device,
+ void *async_arg);
+static void ata_action(union ccb *start_ccb);
+
+static struct xpt_xport ata_xport = {
+ .alloc_device = ata_alloc_device,
+ .action = ata_action,
+ .async = ata_dev_async,
+};
+
+struct xpt_xport *
+ata_get_xport(void)
+{
+ return (&ata_xport);
+}
+
+static void
+probe_periph_init()
+{
+}
+
+static cam_status
+proberegister(struct cam_periph *periph, void *arg)
+{
+ union ccb *request_ccb; /* CCB representing the probe request */
+ cam_status status;
+ probe_softc *softc;
+
+ request_ccb = (union ccb *)arg;
+ if (periph == NULL) {
+ printf("proberegister: periph was NULL!!\n");
+ return(CAM_REQ_CMP_ERR);
+ }
+
+ if (request_ccb == NULL) {
+ printf("proberegister: no probe CCB, "
+ "can't register device\n");
+ return(CAM_REQ_CMP_ERR);
+ }
+
+ softc = (probe_softc *)malloc(sizeof(*softc), M_CAMXPT, M_NOWAIT);
+
+ if (softc == NULL) {
+ printf("proberegister: Unable to probe new device. "
+ "Unable to allocate softc\n");
+ return(CAM_REQ_CMP_ERR);
+ }
+ TAILQ_INIT(&softc->request_ccbs);
+ TAILQ_INSERT_TAIL(&softc->request_ccbs, &request_ccb->ccb_h,
+ periph_links.tqe);
+ softc->flags = 0;
+ periph->softc = softc;
+ softc->periph = periph;
+ softc->action = PROBE_INVALID;
+ status = cam_periph_acquire(periph);
+ if (status != CAM_REQ_CMP) {
+ return (status);
+ }
+
+
+ /*
+ * Ensure we've waited at least a bus settle
+ * delay before attempting to probe the device.
+ * For HBAs that don't do bus resets, this won't make a difference.
+ */
+ cam_periph_freeze_after_event(periph, &periph->path->bus->last_reset,
+ scsi_delay);
+ probeschedule(periph);
+ return(CAM_REQ_CMP);
+}
+
+static void
+probeschedule(struct cam_periph *periph)
+{
+ struct ccb_pathinq cpi;
+ union ccb *ccb;
+ probe_softc *softc;
+
+ softc = (probe_softc *)periph->softc;
+ ccb = (union ccb *)TAILQ_FIRST(&softc->request_ccbs);
+
+ xpt_setup_ccb(&cpi.ccb_h, periph->path, /*priority*/1);
+ cpi.ccb_h.func_code = XPT_PATH_INQ;
+ xpt_action((union ccb *)&cpi);
+
+ if (periph->path->device->flags & CAM_DEV_UNCONFIGURED)
+ PROBE_SET_ACTION(softc, PROBE_RESET);
+ else if (periph->path->device->protocol == PROTO_SATAPM)
+ PROBE_SET_ACTION(softc, PROBE_PM_PID);
+ else
+ PROBE_SET_ACTION(softc, PROBE_IDENTIFY);
+
+ if (ccb->crcn.flags & CAM_EXPECT_INQ_CHANGE)
+ softc->flags |= PROBE_NO_ANNOUNCE;
+ else
+ softc->flags &= ~PROBE_NO_ANNOUNCE;
+
+ xpt_schedule(periph, ccb->ccb_h.pinfo.priority);
+}
+
+static void
+probestart(struct cam_periph *periph, union ccb *start_ccb)
+{
+ /* Probe the device that our peripheral driver points to */
+ struct ccb_ataio *ataio;
+ struct ccb_scsiio *csio;
+ struct ccb_trans_settings cts;
+ probe_softc *softc;
+
+ CAM_DEBUG(start_ccb->ccb_h.path, CAM_DEBUG_TRACE, ("probestart\n"));
+
+ softc = (probe_softc *)periph->softc;
+ ataio = &start_ccb->ataio;
+ csio = &start_ccb->csio;
+
+ switch (softc->action) {
+ case PROBE_RESET:
+ if (start_ccb->ccb_h.target_id == 15) {
+ /* Tell the SIM that we have no knowledge of PM presence. */
+ bzero(&cts, sizeof(cts));
+ xpt_setup_ccb(&cts.ccb_h, start_ccb->ccb_h.path, 1);
+ cts.ccb_h.func_code = XPT_SET_TRAN_SETTINGS;
+ cts.type = CTS_TYPE_CURRENT_SETTINGS;
+ cts.xport_specific.sata.pm_present = 0;
+ cts.xport_specific.sata.valid = CTS_SATA_VALID_PM;
+ xpt_action((union ccb *)&cts);
+ }
+ cam_fill_ataio(ataio,
+ 0,
+ probedone,
+ /*flags*/CAM_DIR_NONE,
+ MSG_SIMPLE_Q_TAG,
+ /*data_ptr*/NULL,
+ /*dxfer_len*/0,
+ (start_ccb->ccb_h.target_id == 15 ? 3 : 15) * 1000);
+ ata_reset_cmd(ataio);
+ break;
+ case PROBE_IDENTIFY:
+ {
+ struct ata_params *ident_buf =
+ &periph->path->device->ident_data;
+
+ if ((periph->path->device->flags & CAM_DEV_UNCONFIGURED) == 0) {
+ /* Prepare check that it is the same device. */
+ MD5_CTX context;
+
+ MD5Init(&context);
+ MD5Update(&context,
+ (unsigned char *)ident_buf->model,
+ sizeof(ident_buf->model));
+ MD5Update(&context,
+ (unsigned char *)ident_buf->revision,
+ sizeof(ident_buf->revision));
+ MD5Update(&context,
+ (unsigned char *)ident_buf->serial,
+ sizeof(ident_buf->serial));
+ MD5Final(softc->digest, &context);
+ }
+ cam_fill_ataio(ataio,
+ 1,
+ probedone,
+ /*flags*/CAM_DIR_IN,
+ MSG_SIMPLE_Q_TAG,
+ /*data_ptr*/(u_int8_t *)ident_buf,
+ /*dxfer_len*/sizeof(struct ata_params),
+ 30 * 1000);
+ if (periph->path->device->protocol == PROTO_ATA)
+ ata_36bit_cmd(ataio, ATA_ATA_IDENTIFY, 0, 0, 0);
+ else
+ ata_36bit_cmd(ataio, ATA_ATAPI_IDENTIFY, 0, 0, 0);
+ break;
+ }
+ case PROBE_SETMODE:
+ {
+ struct ata_params *ident_buf =
+ &periph->path->device->ident_data;
+
+ cam_fill_ataio(ataio,
+ 1,
+ probedone,
+ /*flags*/CAM_DIR_IN,
+ MSG_SIMPLE_Q_TAG,
+ /*data_ptr*/(u_int8_t *)ident_buf,
+ /*dxfer_len*/sizeof(struct ata_params),
+ 30 * 1000);
+ ata_36bit_cmd(ataio, ATA_SETFEATURES, ATA_SF_SETXFER, 0,
+ ata_max_mode(ident_buf, ATA_UDMA6, ATA_UDMA6));
+ break;
+ }
+ case PROBE_INQUIRY:
+ case PROBE_FULL_INQUIRY:
+ {
+ u_int inquiry_len;
+ struct scsi_inquiry_data *inq_buf =
+ &periph->path->device->inq_data;
+
+ if (softc->action == PROBE_INQUIRY)
+ inquiry_len = SHORT_INQUIRY_LENGTH;
+ else
+ inquiry_len = SID_ADDITIONAL_LENGTH(inq_buf);
+ /*
+ * Some parallel SCSI devices fail to send an
+ * ignore wide residue message when dealing with
+ * odd length inquiry requests. Round up to be
+ * safe.
+ */
+ inquiry_len = roundup2(inquiry_len, 2);
+ scsi_inquiry(csio,
+ /*retries*/1,
+ probedone,
+ MSG_SIMPLE_Q_TAG,
+ (u_int8_t *)inq_buf,
+ inquiry_len,
+ /*evpd*/FALSE,
+ /*page_code*/0,
+ SSD_MIN_SIZE,
+ /*timeout*/60 * 1000);
+ break;
+ }
+ case PROBE_PM_PID:
+ cam_fill_ataio(ataio,
+ 1,
+ probedone,
+ /*flags*/CAM_DIR_NONE,
+ MSG_SIMPLE_Q_TAG,
+ /*data_ptr*/NULL,
+ /*dxfer_len*/0,
+ 10 * 1000);
+ ata_pm_read_cmd(ataio, 0, 15);
+ break;
+ case PROBE_PM_PRV:
+ cam_fill_ataio(ataio,
+ 1,
+ probedone,
+ /*flags*/CAM_DIR_NONE,
+ MSG_SIMPLE_Q_TAG,
+ /*data_ptr*/NULL,
+ /*dxfer_len*/0,
+ 10 * 1000);
+ ata_pm_read_cmd(ataio, 1, 15);
+ break;
+ case PROBE_PM_PORTS:
+ cam_fill_ataio(ataio,
+ 1,
+ probedone,
+ /*flags*/CAM_DIR_NONE,
+ MSG_SIMPLE_Q_TAG,
+ /*data_ptr*/NULL,
+ /*dxfer_len*/0,
+ 10 * 1000);
+ ata_pm_read_cmd(ataio, 2, 15);
+ break;
+ case PROBE_PM_RESET:
+ {
+ struct ata_params *ident_buf =
+ &periph->path->device->ident_data;
+ cam_fill_ataio(ataio,
+ 1,
+ probedone,
+ /*flags*/CAM_DIR_NONE,
+ MSG_SIMPLE_Q_TAG,
+ /*data_ptr*/NULL,
+ /*dxfer_len*/0,
+ 10 * 1000);
+ ata_pm_write_cmd(ataio, 2, softc->pm_step,
+ (ident_buf->cylinders & (1 << softc->pm_step)) ? 0 : 1);
+printf("PM RESET %d %04x %d\n", softc->pm_step, ident_buf->cylinders,
+ (ident_buf->cylinders & (1 << softc->pm_step)) ? 0 : 1);
+ break;
+ }
+ case PROBE_PM_CONNECT:
+ cam_fill_ataio(ataio,
+ 1,
+ probedone,
+ /*flags*/CAM_DIR_NONE,
+ MSG_SIMPLE_Q_TAG,
+ /*data_ptr*/NULL,
+ /*dxfer_len*/0,
+ 10 * 1000);
+ ata_pm_write_cmd(ataio, 2, softc->pm_step, 0);
+ break;
+ case PROBE_PM_CHECK:
+ cam_fill_ataio(ataio,
+ 1,
+ probedone,
+ /*flags*/CAM_DIR_NONE,
+ MSG_SIMPLE_Q_TAG,
+ /*data_ptr*/NULL,
+ /*dxfer_len*/0,
+ 10 * 1000);
+ ata_pm_read_cmd(ataio, 0, softc->pm_step);
+ break;
+ case PROBE_PM_CLEAR:
+ cam_fill_ataio(ataio,
+ 1,
+ probedone,
+ /*flags*/CAM_DIR_NONE,
+ MSG_SIMPLE_Q_TAG,
+ /*data_ptr*/NULL,
+ /*dxfer_len*/0,
+ 10 * 1000);
+ ata_pm_write_cmd(ataio, 1, softc->pm_step, 0xFFFFFFFF);
+ break;
+ case PROBE_INVALID:
+ CAM_DEBUG(start_ccb->ccb_h.path, CAM_DEBUG_INFO,
+ ("probestart: invalid action state\n"));
+ default:
+ break;
+ }
+ xpt_action(start_ccb);
+}
+#if 0
+static void
+proberequestdefaultnegotiation(struct cam_periph *periph)
+{
+ struct ccb_trans_settings cts;
+
+ xpt_setup_ccb(&cts.ccb_h, periph->path, /*priority*/1);
+ cts.ccb_h.func_code = XPT_GET_TRAN_SETTINGS;
+ cts.type = CTS_TYPE_USER_SETTINGS;
+ xpt_action((union ccb *)&cts);
+ if ((cts.ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
+ return;
+ }
+ cts.ccb_h.func_code = XPT_SET_TRAN_SETTINGS;
+ cts.type = CTS_TYPE_CURRENT_SETTINGS;
+ xpt_action((union ccb *)&cts);
+}
+
+/*
+ * Backoff Negotiation Code- only pertinent for SPI devices.
+ */
+static int
+proberequestbackoff(struct cam_periph *periph, struct cam_ed *device)
+{
+ struct ccb_trans_settings cts;
+ struct ccb_trans_settings_spi *spi;
+
+ memset(&cts, 0, sizeof (cts));
+ xpt_setup_ccb(&cts.ccb_h, periph->path, /*priority*/1);
+ cts.ccb_h.func_code = XPT_GET_TRAN_SETTINGS;
+ cts.type = CTS_TYPE_CURRENT_SETTINGS;
+ xpt_action((union ccb *)&cts);
+ if ((cts.ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
+ if (bootverbose) {
+ xpt_print(periph->path,
+ "failed to get current device settings\n");
+ }
+ return (0);
+ }
+ if (cts.transport != XPORT_SPI) {
+ if (bootverbose) {
+ xpt_print(periph->path, "not SPI transport\n");
+ }
+ return (0);
+ }
+ spi = &cts.xport_specific.spi;
+
+ /*
+ * We cannot renegotiate sync rate if we don't have one.
+ */
+ if ((spi->valid & CTS_SPI_VALID_SYNC_RATE) == 0) {
+ if (bootverbose) {
+ xpt_print(periph->path, "no sync rate known\n");
+ }
+ return (0);
+ }
+
+ /*
+ * We'll assert that we don't have to touch PPR options- the
+ * SIM will see what we do with period and offset and adjust
+ * the PPR options as appropriate.
+ */
+
+ /*
+ * A sync rate with unknown or zero offset is nonsensical.
+ * A sync period of zero means Async.
+ */
+ if ((spi->valid & CTS_SPI_VALID_SYNC_OFFSET) == 0
+ || spi->sync_offset == 0 || spi->sync_period == 0) {
+ if (bootverbose) {
+ xpt_print(periph->path, "no sync rate available\n");
+ }
+ return (0);
+ }
+
+ if (device->flags & CAM_DEV_DV_HIT_BOTTOM) {
+ CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
+ ("hit async: giving up on DV\n"));
+ return (0);
+ }
+
+
+ /*
+ * Jump sync_period up by one, but stop at 5MHz and fall back to Async.
+ * We don't try to remember 'last' settings to see if the SIM actually
+ * gets into the speed we want to set. We check on the SIM telling
+ * us that a requested speed is bad, but otherwise don't try and
+ * check the speed due to the asynchronous and handshake nature
+ * of speed setting.
+ */
+ spi->valid = CTS_SPI_VALID_SYNC_RATE | CTS_SPI_VALID_SYNC_OFFSET;
+ for (;;) {
+ spi->sync_period++;
+ if (spi->sync_period >= 0xf) {
+ spi->sync_period = 0;
+ spi->sync_offset = 0;
+ CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
+ ("setting to async for DV\n"));
+ /*
+ * Once we hit async, we don't want to try
+ * any more settings.
+ */
+ device->flags |= CAM_DEV_DV_HIT_BOTTOM;
+ } else if (bootverbose) {
+ CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
+ ("DV: period 0x%x\n", spi->sync_period));
+ printf("setting period to 0x%x\n", spi->sync_period);
+ }
+ cts.ccb_h.func_code = XPT_SET_TRAN_SETTINGS;
+ cts.type = CTS_TYPE_CURRENT_SETTINGS;
+ xpt_action((union ccb *)&cts);
+ if ((cts.ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
+ break;
+ }
+ CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
+ ("DV: failed to set period 0x%x\n", spi->sync_period));
+ if (spi->sync_period == 0) {
+ return (0);
+ }
+ }
+ return (1);
+}
+#endif
+static void
+probedone(struct cam_periph *periph, union ccb *done_ccb)
+{
+ struct ata_params *ident_buf;
+ probe_softc *softc;
+ struct cam_path *path;
+ u_int32_t priority;
+ int found = 0;
+
+ CAM_DEBUG(done_ccb->ccb_h.path, CAM_DEBUG_TRACE, ("probedone\n"));
+
+ softc = (probe_softc *)periph->softc;
+ path = done_ccb->ccb_h.path;
+ priority = done_ccb->ccb_h.pinfo.priority;
+ ident_buf = &path->device->ident_data;
+
+ switch (softc->action) {
+ case PROBE_RESET:
+ if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
+ int sign = (done_ccb->ataio.res.lba_high << 8) +
+ done_ccb->ataio.res.lba_mid;
+ xpt_print(path, "SIGNATURE: %04x\n", sign);
+ if (sign == 0x0000 &&
+ done_ccb->ccb_h.target_id != 15) {
+ path->device->protocol = PROTO_ATA;
+ PROBE_SET_ACTION(softc, PROBE_IDENTIFY);
+ } else if (sign == 0x9669 &&
+ done_ccb->ccb_h.target_id == 15) {
+ struct ccb_trans_settings cts;
+
+ /* Report to the SIM that a PM is present. */
+ bzero(&cts, sizeof(cts));
+ xpt_setup_ccb(&cts.ccb_h, path, 1);
+ cts.ccb_h.func_code = XPT_SET_TRAN_SETTINGS;
+ cts.type = CTS_TYPE_CURRENT_SETTINGS;
+ cts.xport_specific.sata.pm_present = 1;
+ cts.xport_specific.sata.valid = CTS_SATA_VALID_PM;
+ xpt_action((union ccb *)&cts);
+ path->device->protocol = PROTO_SATAPM;
+ PROBE_SET_ACTION(softc, PROBE_PM_PID);
+ } else if (sign == 0xeb14 &&
+ done_ccb->ccb_h.target_id != 15) {
+ path->device->protocol = PROTO_SCSI;
+ PROBE_SET_ACTION(softc, PROBE_IDENTIFY);
+ } else {
+ if (done_ccb->ccb_h.target_id != 15) {
+ xpt_print(path,
+ "Unexpected signature 0x%04x\n", sign);
+ }
+ xpt_release_ccb(done_ccb);
+ break;
+ }
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ } else if (cam_periph_error(done_ccb, 0, 0,
+ &softc->saved_ccb) == ERESTART) {
+ return;
+ } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
+ /*run_queue*/TRUE);
+ }
+ goto device_fail;
+ case PROBE_IDENTIFY:
+ {
+ if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
+ int16_t *ptr;
+
+ for (ptr = (int16_t *)ident_buf;
+ ptr < (int16_t *)ident_buf + sizeof(struct ata_params)/2; ptr++) {
+ *ptr = le16toh(*ptr);
+ }
+ if (strncmp(ident_buf->model, "FX", 2) &&
+ strncmp(ident_buf->model, "NEC", 3) &&
+ strncmp(ident_buf->model, "Pioneer", 7) &&
+ strncmp(ident_buf->model, "SHARP", 5)) {
+ ata_bswap(ident_buf->model, sizeof(ident_buf->model));
+ ata_bswap(ident_buf->revision, sizeof(ident_buf->revision));
+ ata_bswap(ident_buf->serial, sizeof(ident_buf->serial));
+ }
+ ata_btrim(ident_buf->model, sizeof(ident_buf->model));
+ ata_bpack(ident_buf->model, ident_buf->model, sizeof(ident_buf->model));
+ ata_btrim(ident_buf->revision, sizeof(ident_buf->revision));
+ ata_bpack(ident_buf->revision, ident_buf->revision, sizeof(ident_buf->revision));
+ ata_btrim(ident_buf->serial, sizeof(ident_buf->serial));
+ ata_bpack(ident_buf->serial, ident_buf->serial, sizeof(ident_buf->serial));
+
+ if ((periph->path->device->flags & CAM_DEV_UNCONFIGURED) == 0) {
+ /* Check that it is the same device. */
+ MD5_CTX context;
+ u_int8_t digest[16];
+
+ MD5Init(&context);
+ MD5Update(&context,
+ (unsigned char *)ident_buf->model,
+ sizeof(ident_buf->model));
+ MD5Update(&context,
+ (unsigned char *)ident_buf->revision,
+ sizeof(ident_buf->revision));
+ MD5Update(&context,
+ (unsigned char *)ident_buf->serial,
+ sizeof(ident_buf->serial));
+ MD5Final(digest, &context);
+ if (bcmp(digest, softc->digest, sizeof(digest))) {
+ /* Device changed. */
+ xpt_async(AC_LOST_DEVICE, path, NULL);
+ }
+ xpt_release_ccb(done_ccb);
+ break;
+ }
+
+ /* Clean up from previous instance of this device */
+ if (path->device->serial_num != NULL) {
+ free(path->device->serial_num, M_CAMXPT);
+ path->device->serial_num = NULL;
+ path->device->serial_num_len = 0;
+ }
+ path->device->serial_num =
+ (u_int8_t *)malloc((sizeof(ident_buf->serial) + 1),
+ M_CAMXPT, M_NOWAIT);
+ if (path->device->serial_num != NULL) {
+ bcopy(ident_buf->serial,
+ path->device->serial_num,
+ sizeof(ident_buf->serial));
+ path->device->serial_num[sizeof(ident_buf->serial)]
+ = '\0';
+ path->device->serial_num_len =
+ strlen(path->device->serial_num);
+ }
+
+ path->device->flags |= CAM_DEV_INQUIRY_DATA_VALID;
+
+ scsi_find_quirk(path->device);
+ ata_device_transport(path);
+
+ PROBE_SET_ACTION(softc, PROBE_SETMODE);
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ } else if (cam_periph_error(done_ccb, 0, 0,
+ &softc->saved_ccb) == ERESTART) {
+ return;
+ } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
+ /*run_queue*/TRUE);
+ }
+device_fail:
+ /*
+ * If we get to this point, we got an error status back
+ * from the inquiry and the error status doesn't require
+ * automatically retrying the command. Therefore, the
+ * inquiry failed. If we had inquiry information before
+ * for this device, but this latest inquiry command failed,
+ * the device has probably gone away. If this device isn't
+ * already marked unconfigured, notify the peripheral
+ * drivers that this device is no more.
+ */
+ if ((path->device->flags & CAM_DEV_UNCONFIGURED) == 0)
+ /* Send the async notification. */
+ xpt_async(AC_LOST_DEVICE, path, NULL);
+
+ xpt_release_ccb(done_ccb);
+ break;
+ }
+ case PROBE_SETMODE:
+ {
+ if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
+ if (path->device->protocol == PROTO_ATA) {
+ path->device->flags &= ~CAM_DEV_UNCONFIGURED;
+ done_ccb->ccb_h.func_code = XPT_GDEV_TYPE;
+ xpt_action(done_ccb);
+ xpt_async(AC_FOUND_DEVICE, done_ccb->ccb_h.path,
+ done_ccb);
+ xpt_release_ccb(done_ccb);
+ break;
+ } else {
+ PROBE_SET_ACTION(softc, PROBE_INQUIRY);
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ }
+ } else if (cam_periph_error(done_ccb, 0, 0,
+ &softc->saved_ccb) == ERESTART) {
+ return;
+ } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
+ /*run_queue*/TRUE);
+ }
+ goto device_fail;
+ }
+ case PROBE_INQUIRY:
+ case PROBE_FULL_INQUIRY:
+ {
+ if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
+ struct scsi_inquiry_data *inq_buf;
+ u_int8_t periph_qual;
+
+ path->device->flags |= CAM_DEV_INQUIRY_DATA_VALID;
+ inq_buf = &path->device->inq_data;
+
+ periph_qual = SID_QUAL(inq_buf);
+
+ if (periph_qual == SID_QUAL_LU_CONNECTED) {
+ u_int8_t len;
+
+ /*
+ * We conservatively request only
+ * SHORT_INQUIRY_LEN bytes of inquiry
+ * information during our first try
+ * at sending an INQUIRY. If the device
+ * has more information to give,
+ * perform a second request specifying
+ * the amount of information the device
+ * is willing to give.
+ */
+ len = inq_buf->additional_length
+ + offsetof(struct scsi_inquiry_data,
+ additional_length) + 1;
+ if (softc->action == PROBE_INQUIRY
+ && len > SHORT_INQUIRY_LENGTH) {
+ PROBE_SET_ACTION(softc, PROBE_FULL_INQUIRY);
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ }
+
+ scsi_find_quirk(path->device);
+
+// scsi_devise_transport(path);
+ path->device->flags &= ~CAM_DEV_UNCONFIGURED;
+ done_ccb->ccb_h.func_code = XPT_GDEV_TYPE;
+ xpt_action(done_ccb);
+ xpt_async(AC_FOUND_DEVICE, done_ccb->ccb_h.path,
+ done_ccb);
+ xpt_release_ccb(done_ccb);
+ break;
+ }
+ } else if (cam_periph_error(done_ccb, 0, 0,
+ &softc->saved_ccb) == ERESTART) {
+ return;
+ } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
+ /*run_queue*/TRUE);
+ }
+ goto device_fail;
+ }
+ case PROBE_PM_PID:
+ if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
+ if ((path->device->flags & CAM_DEV_INQUIRY_DATA_VALID) == 0)
+ bzero(ident_buf, sizeof(*ident_buf));
+ softc->pm_pid = (done_ccb->ataio.res.lba_high << 24) +
+ (done_ccb->ataio.res.lba_mid << 16) +
+ (done_ccb->ataio.res.lba_low << 8) +
+ done_ccb->ataio.res.sector_count;
+ printf("PM Product ID: %08x\n", softc->pm_pid);
+ snprintf(ident_buf->model, sizeof(ident_buf->model),
+ "Port Multiplier %08x", softc->pm_pid);
+ PROBE_SET_ACTION(softc, PROBE_PM_PRV);
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ } else if (cam_periph_error(done_ccb, 0, 0,
+ &softc->saved_ccb) == ERESTART) {
+ return;
+ } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
+ /*run_queue*/TRUE);
+ }
+ goto device_fail;
+ case PROBE_PM_PRV:
+ if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
+ softc->pm_prv = (done_ccb->ataio.res.lba_high << 24) +
+ (done_ccb->ataio.res.lba_mid << 16) +
+ (done_ccb->ataio.res.lba_low << 8) +
+ done_ccb->ataio.res.sector_count;
+ printf("PM Revision: %08x\n", softc->pm_prv);
+ snprintf(ident_buf->revision, sizeof(ident_buf->revision),
+ "%04x", softc->pm_prv);
+ PROBE_SET_ACTION(softc, PROBE_PM_PORTS);
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ } else if (cam_periph_error(done_ccb, 0, 0,
+ &softc->saved_ccb) == ERESTART) {
+ return;
+ } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
+ /*run_queue*/TRUE);
+ }
+ goto device_fail;
+ case PROBE_PM_PORTS:
+ if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
+ softc->pm_ports = (done_ccb->ataio.res.lba_high << 24) +
+ (done_ccb->ataio.res.lba_mid << 16) +
+ (done_ccb->ataio.res.lba_low << 8) +
+ done_ccb->ataio.res.sector_count;
+ /* This PM declares 6 ports, while only 5 of them are real.
+ * Port 5 is an enclosure management bridge port, which has
+ * implementation problems, causing probe faults. Hide it for now. */
+ if (softc->pm_pid == 0x37261095 && softc->pm_ports == 6)
+ softc->pm_ports = 5;
+ /* This PM declares 7 ports, while only 5 of them are real.
+ * Port 5 is a fake "Config Disk" with a size of 640 sectors,
+ * port 6 is an enclosure management bridge port. Both fake
+ * ports have implementation problems, causing probe faults.
+ * Hide them for now. */
+ if (softc->pm_pid == 0x47261095 && softc->pm_ports == 7)
+ softc->pm_ports = 5;
+ printf("PM ports: %d\n", softc->pm_ports);
+ ident_buf->config = softc->pm_ports;
+ path->device->flags |= CAM_DEV_INQUIRY_DATA_VALID;
+ softc->pm_step = 0;
+ PROBE_SET_ACTION(softc, PROBE_PM_RESET);
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ } else if (cam_periph_error(done_ccb, 0, 0,
+ &softc->saved_ccb) == ERESTART) {
+ return;
+ } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
+ /*run_queue*/TRUE);
+ }
+ goto device_fail;
+ case PROBE_PM_RESET:
+ if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
+ softc->pm_step++;
+ if (softc->pm_step < softc->pm_ports) {
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ } else {
+ softc->pm_step = 0;
+ DELAY(5000);
+ printf("PM reset done\n");
+ PROBE_SET_ACTION(softc, PROBE_PM_CONNECT);
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ }
+ } else if (cam_periph_error(done_ccb, 0, 0,
+ &softc->saved_ccb) == ERESTART) {
+ return;
+ } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
+ /*run_queue*/TRUE);
+ }
+ goto device_fail;
+ case PROBE_PM_CONNECT:
+ if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
+ softc->pm_step++;
+ if (softc->pm_step < softc->pm_ports) {
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ } else {
+ softc->pm_step = 0;
+ softc->pm_try = 0;
+ printf("PM connect done\n");
+ PROBE_SET_ACTION(softc, PROBE_PM_CHECK);
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ }
+ } else if (cam_periph_error(done_ccb, 0, 0,
+ &softc->saved_ccb) == ERESTART) {
+ return;
+ } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
+ /*run_queue*/TRUE);
+ }
+ goto device_fail;
+ case PROBE_PM_CHECK:
+ if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
+ int res = (done_ccb->ataio.res.lba_high << 24) +
+ (done_ccb->ataio.res.lba_mid << 16) +
+ (done_ccb->ataio.res.lba_low << 8) +
+ done_ccb->ataio.res.sector_count;
+ if ((res & 0xf0f) == 0x103 && (res & 0x0f0) != 0) {
+ printf("PM status: %d - %08x\n", softc->pm_step, res);
+ ident_buf->cylinders |= (1 << softc->pm_step);
+ softc->pm_step++;
+ } else {
+ if (softc->pm_try < 100) {
+ DELAY(10000);
+ softc->pm_try++;
+ } else {
+ printf("PM status: %d - %08x\n", softc->pm_step, res);
+ ident_buf->cylinders &= ~(1 << softc->pm_step);
+ softc->pm_step++;
+ }
+ }
+ if (softc->pm_step < softc->pm_ports) {
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ } else {
+ softc->pm_step = 0;
+ PROBE_SET_ACTION(softc, PROBE_PM_CLEAR);
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ }
+ } else if (cam_periph_error(done_ccb, 0, 0,
+ &softc->saved_ccb) == ERESTART) {
+ return;
+ } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
+ /*run_queue*/TRUE);
+ }
+ goto device_fail;
+ case PROBE_PM_CLEAR:
+ if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
+ softc->pm_step++;
+ if (softc->pm_step < softc->pm_ports) {
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ }
+ found = ident_buf->cylinders | 0x8000;
+ if (path->device->flags & CAM_DEV_UNCONFIGURED) {
+ path->device->flags &= ~CAM_DEV_UNCONFIGURED;
+ done_ccb->ccb_h.func_code = XPT_GDEV_TYPE;
+ xpt_action(done_ccb);
+ xpt_async(AC_FOUND_DEVICE, done_ccb->ccb_h.path,
+ done_ccb);
+ xpt_release_ccb(done_ccb);
+ }
+ break;
+ } else if (cam_periph_error(done_ccb, 0, 0,
+ &softc->saved_ccb) == ERESTART) {
+ return;
+ } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
+ /*run_queue*/TRUE);
+ }
+ goto device_fail;
+ case PROBE_INVALID:
+ CAM_DEBUG(done_ccb->ccb_h.path, CAM_DEBUG_INFO,
+ ("probedone: invalid action state\n"));
+ default:
+ break;
+ }
+ done_ccb = (union ccb *)TAILQ_FIRST(&softc->request_ccbs);
+ TAILQ_REMOVE(&softc->request_ccbs, &done_ccb->ccb_h, periph_links.tqe);
+ done_ccb->ccb_h.status = CAM_REQ_CMP;
+ done_ccb->ccb_h.ppriv_field1 = found;
+ xpt_done(done_ccb);
+ if (TAILQ_FIRST(&softc->request_ccbs) == NULL) {
+ cam_periph_invalidate(periph);
+ cam_periph_release_locked(periph);
+ } else {
+ probeschedule(periph);
+ }
+}
+
+static void
+probecleanup(struct cam_periph *periph)
+{
+ free(periph->softc, M_CAMXPT);
+}
+
+static void
+scsi_find_quirk(struct cam_ed *device)
+{
+ struct scsi_quirk_entry *quirk;
+ caddr_t match;
+
+ match = cam_quirkmatch((caddr_t)&device->inq_data,
+ (caddr_t)scsi_quirk_table,
+ sizeof(scsi_quirk_table) /
+ sizeof(*scsi_quirk_table),
+ sizeof(*scsi_quirk_table), scsi_inquiry_match);
+
+ if (match == NULL)
+ panic("scsi_find_quirk: device didn't match wildcard entry!!");
+
+ quirk = (struct scsi_quirk_entry *)match;
+ device->quirk = quirk;
+ device->mintags = quirk->mintags;
+ device->maxtags = quirk->maxtags;
+}
+
+typedef struct {
+ union ccb *request_ccb;
+ struct ccb_pathinq *cpi;
+ int counter;
+ int found;
+} ata_scan_bus_info;
+
+/*
+ * To start a scan, request_ccb is an XPT_SCAN_BUS ccb.
+ * As the scan progresses, xpt_scan_bus is used as the
+ * callback on completion function.
+ */
+static void
+ata_scan_bus(struct cam_periph *periph, union ccb *request_ccb)
+{
+ struct cam_path *path;
+ ata_scan_bus_info *scan_info;
+ union ccb *work_ccb;
+ cam_status status;
+
+ CAM_DEBUG(request_ccb->ccb_h.path, CAM_DEBUG_TRACE,
+ ("ata_scan_bus\n"));
+ switch (request_ccb->ccb_h.func_code) {
+ case XPT_SCAN_BUS:
+ /* Find out the characteristics of the bus */
+ work_ccb = xpt_alloc_ccb_nowait();
+ if (work_ccb == NULL) {
+ request_ccb->ccb_h.status = CAM_RESRC_UNAVAIL;
+ xpt_done(request_ccb);
+ return;
+ }
+ xpt_setup_ccb(&work_ccb->ccb_h, request_ccb->ccb_h.path,
+ request_ccb->ccb_h.pinfo.priority);
+ work_ccb->ccb_h.func_code = XPT_PATH_INQ;
+ xpt_action(work_ccb);
+ if (work_ccb->ccb_h.status != CAM_REQ_CMP) {
+ request_ccb->ccb_h.status = work_ccb->ccb_h.status;
+ xpt_free_ccb(work_ccb);
+ xpt_done(request_ccb);
+ return;
+ }
+
+ /* Save some state for use while we probe for devices */
+ scan_info = (ata_scan_bus_info *)
+ malloc(sizeof(ata_scan_bus_info), M_CAMXPT, M_NOWAIT);
+ if (scan_info == NULL) {
+ request_ccb->ccb_h.status = CAM_RESRC_UNAVAIL;
+ xpt_done(request_ccb);
+ return;
+ }
+ scan_info->request_ccb = request_ccb;
+ scan_info->cpi = &work_ccb->cpi;
+ scan_info->found = 0x8001;
+ scan_info->counter = 0;
+ /* If PM supported, probe it first. */
+ if (scan_info->cpi->hba_inquiry & PI_SATAPM)
+ scan_info->counter = 15;
+
+ work_ccb = xpt_alloc_ccb_nowait();
+ if (work_ccb == NULL) {
+ free(scan_info, M_CAMXPT);
+ request_ccb->ccb_h.status = CAM_RESRC_UNAVAIL;
+ xpt_done(request_ccb);
+ break;
+ }
+ goto scan_next;
+ case XPT_SCAN_LUN:
+ work_ccb = request_ccb;
+ /* Reuse the same CCB to query if a device was really found */
+ scan_info = (ata_scan_bus_info *)work_ccb->ccb_h.ppriv_ptr0;
+ /* Free the current request path- we're done with it. */
+ xpt_free_path(work_ccb->ccb_h.path);
+ /* If there is PM... */
+ if (scan_info->counter == 15) {
+ if (work_ccb->ccb_h.ppriv_field1 != 0) {
+ /* Save PM probe result. */
+ scan_info->found = work_ccb->ccb_h.ppriv_field1;
+ } else {
+ struct ccb_trans_settings cts;
+
+ /* Report to the SIM that the PM is absent. */
+ bzero(&cts, sizeof(cts));
+ xpt_setup_ccb(&cts.ccb_h,
+ scan_info->request_ccb->ccb_h.path, 1);
+ cts.ccb_h.func_code = XPT_SET_TRAN_SETTINGS;
+ cts.type = CTS_TYPE_CURRENT_SETTINGS;
+ cts.xport_specific.sata.pm_present = 0;
+ cts.xport_specific.sata.valid = CTS_SATA_VALID_PM;
+ xpt_action((union ccb *)&cts);
+ }
+ }
+take_next:
+ /* Take next device. Wrap from 15 (PM) to 0. */
+ scan_info->counter = (scan_info->counter + 1) & 0x0f;
+ if (scan_info->counter >= scan_info->cpi->max_target+1) {
+ xpt_free_ccb(work_ccb);
+ xpt_free_ccb((union ccb *)scan_info->cpi);
+ request_ccb = scan_info->request_ccb;
+ free(scan_info, M_CAMXPT);
+ request_ccb->ccb_h.status = CAM_REQ_CMP;
+ xpt_done(request_ccb);
+ break;
+ }
+scan_next:
+ status = xpt_create_path(&path, xpt_periph,
+ scan_info->request_ccb->ccb_h.path_id,
+ scan_info->counter, 0);
+ if (status != CAM_REQ_CMP) {
+ printf("ata_scan_bus: xpt_create_path failed"
+ " with status %#x, bus scan halted\n",
+ status);
+ xpt_free_ccb(work_ccb);
+ xpt_free_ccb((union ccb *)scan_info->cpi);
+ request_ccb = scan_info->request_ccb;
+ free(scan_info, M_CAMXPT);
+ request_ccb->ccb_h.status = status;
+ xpt_done(request_ccb);
+ break;
+ }
+ if ((scan_info->found & (1 << scan_info->counter)) == 0) {
+ xpt_async(AC_LOST_DEVICE, path, NULL);
+ xpt_free_path(path);
+ goto take_next;
+ }
+ xpt_setup_ccb(&work_ccb->ccb_h, path,
+ scan_info->request_ccb->ccb_h.pinfo.priority);
+ work_ccb->ccb_h.func_code = XPT_SCAN_LUN;
+ work_ccb->ccb_h.cbfcnp = ata_scan_bus;
+ work_ccb->ccb_h.ppriv_ptr0 = scan_info;
+ work_ccb->crcn.flags = scan_info->request_ccb->crcn.flags;
+ xpt_action(work_ccb);
+ break;
+ default:
+ break;
+ }
+}
+
+static void
+ata_scan_lun(struct cam_periph *periph, struct cam_path *path,
+ cam_flags flags, union ccb *request_ccb)
+{
+ struct ccb_pathinq cpi;
+ cam_status status;
+ struct cam_path *new_path;
+ struct cam_periph *old_periph;
+
+ CAM_DEBUG(request_ccb->ccb_h.path, CAM_DEBUG_TRACE,
+ ("ata_scan_lun\n"));
+
+ xpt_setup_ccb(&cpi.ccb_h, path, /*priority*/1);
+ cpi.ccb_h.func_code = XPT_PATH_INQ;
+ xpt_action((union ccb *)&cpi);
+
+ if (cpi.ccb_h.status != CAM_REQ_CMP) {
+ if (request_ccb != NULL) {
+ request_ccb->ccb_h.status = cpi.ccb_h.status;
+ xpt_done(request_ccb);
+ }
+ return;
+ }
+
+ if (request_ccb == NULL) {
+ request_ccb = malloc(sizeof(union ccb), M_CAMXPT, M_NOWAIT);
+ if (request_ccb == NULL) {
+ xpt_print(path, "ata_scan_lun: can't allocate CCB, "
+ "can't continue\n");
+ return;
+ }
+ new_path = malloc(sizeof(*new_path), M_CAMXPT, M_NOWAIT);
+ if (new_path == NULL) {
+ xpt_print(path, "ata_scan_lun: can't allocate path, "
+ "can't continue\n");
+ free(request_ccb, M_CAMXPT);
+ return;
+ }
+ status = xpt_compile_path(new_path, xpt_periph,
+ path->bus->path_id,
+ path->target->target_id,
+ path->device->lun_id);
+
+ if (status != CAM_REQ_CMP) {
+ xpt_print(path, "ata_scan_lun: can't compile path, "
+ "can't continue\n");
+ free(request_ccb, M_CAMXPT);
+ free(new_path, M_CAMXPT);
+ return;
+ }
+ xpt_setup_ccb(&request_ccb->ccb_h, new_path, /*priority*/ 1);
+ request_ccb->ccb_h.cbfcnp = xptscandone;
+ request_ccb->ccb_h.func_code = XPT_SCAN_LUN;
+ request_ccb->crcn.flags = flags;
+ }
+
+ if ((old_periph = cam_periph_find(path, "probe")) != NULL) {
+ probe_softc *softc;
+
+ softc = (probe_softc *)old_periph->softc;
+ TAILQ_INSERT_TAIL(&softc->request_ccbs, &request_ccb->ccb_h,
+ periph_links.tqe);
+ } else {
+ status = cam_periph_alloc(proberegister, NULL, probecleanup,
+ probestart, "probe",
+ CAM_PERIPH_BIO,
+ request_ccb->ccb_h.path, NULL, 0,
+ request_ccb);
+
+ if (status != CAM_REQ_CMP) {
+ xpt_print(path, "ata_scan_lun: cam_periph_alloc "
+ "returned an error, can't continue probe\n");
+ request_ccb->ccb_h.status = status;
+ xpt_done(request_ccb);
+ }
+ }
+}
+
+static void
+xptscandone(struct cam_periph *periph, union ccb *done_ccb)
+{
+ xpt_release_path(done_ccb->ccb_h.path);
+ free(done_ccb->ccb_h.path, M_CAMXPT);
+ free(done_ccb, M_CAMXPT);
+}
+
+static struct cam_ed *
+ata_alloc_device(struct cam_eb *bus, struct cam_et *target, lun_id_t lun_id)
+{
+ struct cam_path path;
+ struct scsi_quirk_entry *quirk;
+ struct cam_ed *device;
+ struct cam_ed *cur_device;
+
+ device = xpt_alloc_device(bus, target, lun_id);
+ if (device == NULL)
+ return (NULL);
+
+ /*
+ * Take the default quirk entry until we have inquiry
+ * data and can determine a better quirk to use.
+ */
+ quirk = &scsi_quirk_table[scsi_quirk_table_size - 1];
+ device->quirk = (void *)quirk;
+ device->mintags = quirk->mintags;
+ device->maxtags = quirk->maxtags;
+ bzero(&device->inq_data, sizeof(device->inq_data));
+ device->inq_flags = 0;
+ device->queue_flags = 0;
+ device->serial_num = NULL;
+ device->serial_num_len = 0;
+
+ /*
+ * XXX should be limited by number of CCBs this bus can
+ * do.
+ */
+ bus->sim->max_ccbs += device->ccbq.devq_openings;
+ /* Insertion sort into our target's device list */
+ cur_device = TAILQ_FIRST(&target->ed_entries);
+ while (cur_device != NULL && cur_device->lun_id < lun_id)
+ cur_device = TAILQ_NEXT(cur_device, links);
+ if (cur_device != NULL) {
+ TAILQ_INSERT_BEFORE(cur_device, device, links);
+ } else {
+ TAILQ_INSERT_TAIL(&target->ed_entries, device, links);
+ }
+ target->generation++;
+ if (lun_id != CAM_LUN_WILDCARD) {
+ xpt_compile_path(&path,
+ NULL,
+ bus->path_id,
+ target->target_id,
+ lun_id);
+ ata_device_transport(&path);
+ xpt_release_path(&path);
+ }
+
+ return (device);
+}
+
+static void
+ata_device_transport(struct cam_path *path)
+{
+ struct ccb_pathinq cpi;
+// struct ccb_trans_settings cts;
+ struct scsi_inquiry_data *inq_buf;
+
+ /* Get transport information from the SIM */
+ xpt_setup_ccb(&cpi.ccb_h, path, /*priority*/1);
+ cpi.ccb_h.func_code = XPT_PATH_INQ;
+ xpt_action((union ccb *)&cpi);
+
+ inq_buf = NULL;
+// if ((path->device->flags & CAM_DEV_INQUIRY_DATA_VALID) != 0)
+// inq_buf = &path->device->inq_data;
+// path->device->protocol = cpi.protocol;
+// path->device->protocol_version =
+// inq_buf != NULL ? SID_ANSI_REV(inq_buf) : cpi.protocol_version;
+ path->device->transport = cpi.transport;
+ path->device->transport_version = cpi.transport_version;
+#if 0
+ /*
+ * Any device not using SPI3 features should
+ * be considered SPI2 or lower.
+ */
+ if (inq_buf != NULL) {
+ if (path->device->transport == XPORT_SPI
+ && (inq_buf->spi3data & SID_SPI_MASK) == 0
+ && path->device->transport_version > 2)
+ path->device->transport_version = 2;
+ } else {
+ struct cam_ed* otherdev;
+
+ for (otherdev = TAILQ_FIRST(&path->target->ed_entries);
+ otherdev != NULL;
+ otherdev = TAILQ_NEXT(otherdev, links)) {
+ if (otherdev != path->device)
+ break;
+ }
+
+ if (otherdev != NULL) {
+ /*
+ * Initially assume the same versioning as
+ * prior luns for this target.
+ */
+ path->device->protocol_version =
+ otherdev->protocol_version;
+ path->device->transport_version =
+ otherdev->transport_version;
+ } else {
+ /* Until we know better, opt for safety */
+ path->device->protocol_version = 2;
+ if (path->device->transport == XPORT_SPI)
+ path->device->transport_version = 2;
+ else
+ path->device->transport_version = 0;
+ }
+ }
+
+ /*
+ * XXX
+ * For a device compliant with SPC-2 we should be able
+ * to determine the transport version supported by
+ * scrutinizing the version descriptors in the
+ * inquiry buffer.
+ */
+
+ /* Tell the controller what we think */
+ xpt_setup_ccb(&cts.ccb_h, path, /*priority*/1);
+ cts.ccb_h.func_code = XPT_SET_TRAN_SETTINGS;
+ cts.type = CTS_TYPE_CURRENT_SETTINGS;
+ cts.transport = path->device->transport;
+ cts.transport_version = path->device->transport_version;
+ cts.protocol = path->device->protocol;
+ cts.protocol_version = path->device->protocol_version;
+ cts.proto_specific.valid = 0;
+ cts.xport_specific.valid = 0;
+ xpt_action((union ccb *)&cts);
+#endif
+}
+
+static void
+ata_action(union ccb *start_ccb)
+{
+
+ switch (start_ccb->ccb_h.func_code) {
+ case XPT_SET_TRAN_SETTINGS:
+ {
+ scsi_set_transfer_settings(&start_ccb->cts,
+ start_ccb->ccb_h.path->device,
+ /*async_update*/FALSE);
+ break;
+ }
+ case XPT_SCAN_BUS:
+ ata_scan_bus(start_ccb->ccb_h.path->periph, start_ccb);
+ break;
+ case XPT_SCAN_LUN:
+ ata_scan_lun(start_ccb->ccb_h.path->periph,
+ start_ccb->ccb_h.path, start_ccb->crcn.flags,
+ start_ccb);
+ break;
+ case XPT_GET_TRAN_SETTINGS:
+ {
+ struct cam_sim *sim;
+
+ sim = start_ccb->ccb_h.path->bus->sim;
+ (*(sim->sim_action))(sim, start_ccb);
+ break;
+ }
+ default:
+ xpt_action_default(start_ccb);
+ break;
+ }
+}
+
+static void
+scsi_set_transfer_settings(struct ccb_trans_settings *cts, struct cam_ed *device,
+ int async_update)
+{
+ struct ccb_pathinq cpi;
+ struct ccb_trans_settings cur_cts;
+ struct ccb_trans_settings_scsi *scsi;
+ struct ccb_trans_settings_scsi *cur_scsi;
+ struct cam_sim *sim;
+ struct scsi_inquiry_data *inq_data;
+
+ if (device == NULL) {
+ cts->ccb_h.status = CAM_PATH_INVALID;
+ xpt_done((union ccb *)cts);
+ return;
+ }
+
+ if (cts->protocol == PROTO_UNKNOWN
+ || cts->protocol == PROTO_UNSPECIFIED) {
+ cts->protocol = device->protocol;
+ cts->protocol_version = device->protocol_version;
+ }
+
+ if (cts->protocol_version == PROTO_VERSION_UNKNOWN
+ || cts->protocol_version == PROTO_VERSION_UNSPECIFIED)
+ cts->protocol_version = device->protocol_version;
+
+ if (cts->protocol != device->protocol) {
+ xpt_print(cts->ccb_h.path, "Uninitialized Protocol %x:%x?\n",
+ cts->protocol, device->protocol);
+ cts->protocol = device->protocol;
+ }
+
+ if (cts->protocol_version > device->protocol_version) {
+ if (bootverbose) {
+ xpt_print(cts->ccb_h.path, "Down reving Protocol "
+ "Version from %d to %d?\n", cts->protocol_version,
+ device->protocol_version);
+ }
+ cts->protocol_version = device->protocol_version;
+ }
+
+ if (cts->transport == XPORT_UNKNOWN
+ || cts->transport == XPORT_UNSPECIFIED) {
+ cts->transport = device->transport;
+ cts->transport_version = device->transport_version;
+ }
+
+ if (cts->transport_version == XPORT_VERSION_UNKNOWN
+ || cts->transport_version == XPORT_VERSION_UNSPECIFIED)
+ cts->transport_version = device->transport_version;
+
+ if (cts->transport != device->transport) {
+ xpt_print(cts->ccb_h.path, "Uninitialized Transport %x:%x?\n",
+ cts->transport, device->transport);
+ cts->transport = device->transport;
+ }
+
+ if (cts->transport_version > device->transport_version) {
+ if (bootverbose) {
+ xpt_print(cts->ccb_h.path, "Down reving Transport "
+ "Version from %d to %d?\n", cts->transport_version,
+ device->transport_version);
+ }
+ cts->transport_version = device->transport_version;
+ }
+
+ sim = cts->ccb_h.path->bus->sim;
+
+ /*
+ * Nothing more of interest to do unless
+ * this is a device connected via the
+ * SCSI protocol.
+ */
+ if (cts->protocol != PROTO_SCSI) {
+ if (async_update == FALSE)
+ (*(sim->sim_action))(sim, (union ccb *)cts);
+ return;
+ }
+
+ inq_data = &device->inq_data;
+ scsi = &cts->proto_specific.scsi;
+ xpt_setup_ccb(&cpi.ccb_h, cts->ccb_h.path, /*priority*/1);
+ cpi.ccb_h.func_code = XPT_PATH_INQ;
+ xpt_action((union ccb *)&cpi);
+
+ /* SCSI specific sanity checking */
+ if ((cpi.hba_inquiry & PI_TAG_ABLE) == 0
+ || (INQ_DATA_TQ_ENABLED(inq_data)) == 0
+ || (device->queue_flags & SCP_QUEUE_DQUE) != 0
+ || (device->mintags == 0)) {
+ /*
+ * Can't tag on hardware that doesn't support tags,
+ * doesn't have it enabled, or has broken tag support.
+ */
+ scsi->flags &= ~CTS_SCSI_FLAGS_TAG_ENB;
+ }
+
+ if (async_update == FALSE) {
+ /*
+ * Perform sanity checking against what the
+ * controller and device can do.
+ */
+ xpt_setup_ccb(&cur_cts.ccb_h, cts->ccb_h.path, /*priority*/1);
+ cur_cts.ccb_h.func_code = XPT_GET_TRAN_SETTINGS;
+ cur_cts.type = cts->type;
+ xpt_action((union ccb *)&cur_cts);
+ if ((cur_cts.ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
+ return;
+ }
+ cur_scsi = &cur_cts.proto_specific.scsi;
+ if ((scsi->valid & CTS_SCSI_VALID_TQ) == 0) {
+ scsi->flags &= ~CTS_SCSI_FLAGS_TAG_ENB;
+ scsi->flags |= cur_scsi->flags & CTS_SCSI_FLAGS_TAG_ENB;
+ }
+ if ((cur_scsi->valid & CTS_SCSI_VALID_TQ) == 0)
+ scsi->flags &= ~CTS_SCSI_FLAGS_TAG_ENB;
+ }
+
+ /* SPI specific sanity checking */
+ if (cts->transport == XPORT_SPI && async_update == FALSE) {
+ u_int spi3caps;
+ struct ccb_trans_settings_spi *spi;
+ struct ccb_trans_settings_spi *cur_spi;
+
+ spi = &cts->xport_specific.spi;
+
+ cur_spi = &cur_cts.xport_specific.spi;
+
+ /* Fill in any gaps in what the user gave us */
+ if ((spi->valid & CTS_SPI_VALID_SYNC_RATE) == 0)
+ spi->sync_period = cur_spi->sync_period;
+ if ((cur_spi->valid & CTS_SPI_VALID_SYNC_RATE) == 0)
+ spi->sync_period = 0;
+ if ((spi->valid & CTS_SPI_VALID_SYNC_OFFSET) == 0)
+ spi->sync_offset = cur_spi->sync_offset;
+ if ((cur_spi->valid & CTS_SPI_VALID_SYNC_OFFSET) == 0)
+ spi->sync_offset = 0;
+ if ((spi->valid & CTS_SPI_VALID_PPR_OPTIONS) == 0)
+ spi->ppr_options = cur_spi->ppr_options;
+ if ((cur_spi->valid & CTS_SPI_VALID_PPR_OPTIONS) == 0)
+ spi->ppr_options = 0;
+ if ((spi->valid & CTS_SPI_VALID_BUS_WIDTH) == 0)
+ spi->bus_width = cur_spi->bus_width;
+ if ((cur_spi->valid & CTS_SPI_VALID_BUS_WIDTH) == 0)
+ spi->bus_width = 0;
+ if ((spi->valid & CTS_SPI_VALID_DISC) == 0) {
+ spi->flags &= ~CTS_SPI_FLAGS_DISC_ENB;
+ spi->flags |= cur_spi->flags & CTS_SPI_FLAGS_DISC_ENB;
+ }
+ if ((cur_spi->valid & CTS_SPI_VALID_DISC) == 0)
+ spi->flags &= ~CTS_SPI_FLAGS_DISC_ENB;
+ if (((device->flags & CAM_DEV_INQUIRY_DATA_VALID) != 0
+ && (inq_data->flags & SID_Sync) == 0
+ && cts->type == CTS_TYPE_CURRENT_SETTINGS)
+ || ((cpi.hba_inquiry & PI_SDTR_ABLE) == 0)) {
+ /* Force async */
+ spi->sync_period = 0;
+ spi->sync_offset = 0;
+ }
+
+ switch (spi->bus_width) {
+ case MSG_EXT_WDTR_BUS_32_BIT:
+ if (((device->flags & CAM_DEV_INQUIRY_DATA_VALID) == 0
+ || (inq_data->flags & SID_WBus32) != 0
+ || cts->type == CTS_TYPE_USER_SETTINGS)
+ && (cpi.hba_inquiry & PI_WIDE_32) != 0)
+ break;
+ /* Fall Through to 16-bit */
+ case MSG_EXT_WDTR_BUS_16_BIT:
+ if (((device->flags & CAM_DEV_INQUIRY_DATA_VALID) == 0
+ || (inq_data->flags & SID_WBus16) != 0
+ || cts->type == CTS_TYPE_USER_SETTINGS)
+ && (cpi.hba_inquiry & PI_WIDE_16) != 0) {
+ spi->bus_width = MSG_EXT_WDTR_BUS_16_BIT;
+ break;
+ }
+ /* Fall Through to 8-bit */
+ default: /* New bus width?? */
+ case MSG_EXT_WDTR_BUS_8_BIT:
+ /* All targets can do this */
+ spi->bus_width = MSG_EXT_WDTR_BUS_8_BIT;
+ break;
+ }
+
+ spi3caps = cpi.xport_specific.spi.ppr_options;
+ if ((device->flags & CAM_DEV_INQUIRY_DATA_VALID) != 0
+ && cts->type == CTS_TYPE_CURRENT_SETTINGS)
+ spi3caps &= inq_data->spi3data;
+
+ if ((spi3caps & SID_SPI_CLOCK_DT) == 0)
+ spi->ppr_options &= ~MSG_EXT_PPR_DT_REQ;
+
+ if ((spi3caps & SID_SPI_IUS) == 0)
+ spi->ppr_options &= ~MSG_EXT_PPR_IU_REQ;
+
+ if ((spi3caps & SID_SPI_QAS) == 0)
+ spi->ppr_options &= ~MSG_EXT_PPR_QAS_REQ;
+
+ /* No SPI Transfer settings are allowed unless we are wide */
+ if (spi->bus_width == 0)
+ spi->ppr_options = 0;
+
+ if ((spi->valid & CTS_SPI_VALID_DISC)
+ && ((spi->flags & CTS_SPI_FLAGS_DISC_ENB) == 0)) {
+ /*
+ * Can't tag queue without disconnection.
+ */
+ scsi->flags &= ~CTS_SCSI_FLAGS_TAG_ENB;
+ scsi->valid |= CTS_SCSI_VALID_TQ;
+ }
+
+ /*
+ * If we are currently performing tagged transactions to
+ * this device and want to change its negotiation parameters,
+ * go non-tagged for a bit to give the controller a chance to
+ * negotiate unhampered by tag messages.
+ */
+ if (cts->type == CTS_TYPE_CURRENT_SETTINGS
+ && (device->inq_flags & SID_CmdQue) != 0
+ && (scsi->flags & CTS_SCSI_FLAGS_TAG_ENB) != 0
+ && (spi->flags & (CTS_SPI_VALID_SYNC_RATE|
+ CTS_SPI_VALID_SYNC_OFFSET|
+ CTS_SPI_VALID_BUS_WIDTH)) != 0)
+ scsi_toggle_tags(cts->ccb_h.path);
+ }
+
+ if (cts->type == CTS_TYPE_CURRENT_SETTINGS
+ && (scsi->valid & CTS_SCSI_VALID_TQ) != 0) {
+ int device_tagenb;
+
+ /*
+ * If we are transitioning from tags to no-tags or
+ * vice-versa, we need to carefully freeze and restart
+ * the queue so that we don't overlap tagged and non-tagged
+ * commands. We also temporarily stop tags if there is
+ * a change in transfer negotiation settings to allow
+ * "tag-less" negotiation.
+ */
+ if ((device->flags & CAM_DEV_TAG_AFTER_COUNT) != 0
+ || (device->inq_flags & SID_CmdQue) != 0)
+ device_tagenb = TRUE;
+ else
+ device_tagenb = FALSE;
+
+ if (((scsi->flags & CTS_SCSI_FLAGS_TAG_ENB) != 0
+ && device_tagenb == FALSE)
+ || ((scsi->flags & CTS_SCSI_FLAGS_TAG_ENB) == 0
+ && device_tagenb == TRUE)) {
+
+ if ((scsi->flags & CTS_SCSI_FLAGS_TAG_ENB) != 0) {
+ /*
+ * Delay change to use tags until after a
+ * few commands have gone to this device so
+ * the controller has time to perform transfer
+ * negotiations without tagged messages getting
+ * in the way.
+ */
+ device->tag_delay_count = CAM_TAG_DELAY_COUNT;
+ device->flags |= CAM_DEV_TAG_AFTER_COUNT;
+ } else {
+ struct ccb_relsim crs;
+
+ xpt_freeze_devq(cts->ccb_h.path, /*count*/1);
+ device->inq_flags &= ~SID_CmdQue;
+ xpt_dev_ccbq_resize(cts->ccb_h.path,
+ sim->max_dev_openings);
+ device->flags &= ~CAM_DEV_TAG_AFTER_COUNT;
+ device->tag_delay_count = 0;
+
+ xpt_setup_ccb(&crs.ccb_h, cts->ccb_h.path,
+ /*priority*/1);
+ crs.ccb_h.func_code = XPT_REL_SIMQ;
+ crs.release_flags = RELSIM_RELEASE_AFTER_QEMPTY;
+ crs.openings
+ = crs.release_timeout
+ = crs.qfrozen_cnt
+ = 0;
+ xpt_action((union ccb *)&crs);
+ }
+ }
+ }
+ if (async_update == FALSE)
+ (*(sim->sim_action))(sim, (union ccb *)cts);
+}
+
+static void
+scsi_toggle_tags(struct cam_path *path)
+{
+ struct cam_ed *dev;
+
+ /*
+ * Give controllers a chance to renegotiate
+ * before starting tag operations. We
+ * "toggle" tagged queuing off then on
+ * which causes the tag enable command delay
+ * counter to come into effect.
+ */
+ dev = path->device;
+ if ((dev->flags & CAM_DEV_TAG_AFTER_COUNT) != 0
+ || ((dev->inq_flags & SID_CmdQue) != 0
+ && (dev->inq_flags & (SID_Sync|SID_WBus16|SID_WBus32)) != 0)) {
+ struct ccb_trans_settings cts;
+
+ xpt_setup_ccb(&cts.ccb_h, path, 1);
+ cts.protocol = PROTO_SCSI;
+ cts.protocol_version = PROTO_VERSION_UNSPECIFIED;
+ cts.transport = XPORT_UNSPECIFIED;
+ cts.transport_version = XPORT_VERSION_UNSPECIFIED;
+ cts.proto_specific.scsi.flags = 0;
+ cts.proto_specific.scsi.valid = CTS_SCSI_VALID_TQ;
+ scsi_set_transfer_settings(&cts, path->device,
+ /*async_update*/TRUE);
+ cts.proto_specific.scsi.flags = CTS_SCSI_FLAGS_TAG_ENB;
+ scsi_set_transfer_settings(&cts, path->device,
+ /*async_update*/TRUE);
+ }
+}
+
+/*
+ * Handle any per-device event notifications that require action by the XPT.
+ */
+static void
+ata_dev_async(u_int32_t async_code, struct cam_eb *bus, struct cam_et *target,
+ struct cam_ed *device, void *async_arg)
+{
+ cam_status status;
+ struct cam_path newpath;
+
+ /*
+ * We only need to handle events for real devices.
+ */
+ if (target->target_id == CAM_TARGET_WILDCARD
+ || device->lun_id == CAM_LUN_WILDCARD)
+ return;
+
+ /*
+ * We need our own path with wildcards expanded to
+ * handle certain types of events.
+ */
+ if ((async_code == AC_SENT_BDR)
+ || (async_code == AC_BUS_RESET)
+ || (async_code == AC_INQ_CHANGED))
+ status = xpt_compile_path(&newpath, NULL,
+ bus->path_id,
+ target->target_id,
+ device->lun_id);
+ else
+ status = CAM_REQ_CMP_ERR;
+
+ if (status == CAM_REQ_CMP) {
+
+ /*
+ * Allow transfer negotiation to occur in a
+		 * tag-free environment.
+ */
+ if (async_code == AC_SENT_BDR
+ || async_code == AC_BUS_RESET)
+ scsi_toggle_tags(&newpath);
+
+ if (async_code == AC_INQ_CHANGED) {
+ /*
+ * We've sent a start unit command, or
+ * something similar to a device that
+ * may have caused its inquiry data to
+ * change. So we re-scan the device to
+ * refresh the inquiry data for it.
+ */
+ ata_scan_lun(newpath.periph, &newpath,
+ CAM_EXPECT_INQ_CHANGE, NULL);
+ }
+ xpt_release_path(&newpath);
+ } else if (async_code == AC_LOST_DEVICE) {
+ device->flags |= CAM_DEV_UNCONFIGURED;
+ } else if (async_code == AC_TRANSFER_NEG) {
+ struct ccb_trans_settings *settings;
+
+ settings = (struct ccb_trans_settings *)async_arg;
+ scsi_set_transfer_settings(settings, device,
+ /*async_update*/TRUE);
+ }
+}
+
diff --git a/sys/cam/cam.c b/sys/cam/cam.c
index ce6891d80563..120050a52772 100644
--- a/sys/cam/cam.c
+++ b/sys/cam/cam.c
@@ -47,6 +47,7 @@ __FBSDID("$FreeBSD$");
#ifdef _KERNEL
#include <sys/libkern.h>
+#include <cam/cam_queue.h>
#include <cam/cam_xpt.h>
#endif
@@ -81,6 +82,7 @@ const struct cam_status_entry cam_status_table[] = {
{ CAM_UNREC_HBA_ERROR, "Unrecoverable Host Bus Adapter Error" },
{ CAM_REQ_TOO_BIG, "The request was too large for this host" },
{ CAM_REQUEUE_REQ, "Unconditionally Re-queue Request", },
+ { CAM_ATA_STATUS_ERROR, "ATA Status Error" },
{ CAM_IDE, "Initiator Detected Error Message Received" },
{ CAM_RESRC_UNAVAIL, "Resource Unavailable" },
{ CAM_UNACKED_EVENT, "Unacknowledged Event by Host" },
diff --git a/sys/cam/cam.h b/sys/cam/cam.h
index 14bbb8f3b559..36ad88a77cf3 100644
--- a/sys/cam/cam.h
+++ b/sys/cam/cam.h
@@ -129,6 +129,7 @@ typedef enum {
* requests for the target at the sim level
* back into the XPT queue.
*/
+ CAM_ATA_STATUS_ERROR, /* ATA error, look at error code in CCB */
CAM_SCSI_IT_NEXUS_LOST, /* Initiator/Target Nexus lost. */
CAM_IDE = 0x33, /* Initiator Detected Error */
CAM_RESRC_UNAVAIL, /* Resource Unavailable */
diff --git a/sys/cam/cam_ccb.h b/sys/cam/cam_ccb.h
index 33799fb897c2..5f10cc67c6d0 100644
--- a/sys/cam/cam_ccb.h
+++ b/sys/cam/cam_ccb.h
@@ -40,6 +40,7 @@
#endif
#include <cam/cam_debug.h>
#include <cam/scsi/scsi_all.h>
+#include <cam/ata/ata_all.h>
/* General allocation length definitions for CCB structures */
@@ -169,6 +170,8 @@ typedef enum {
* a device give the sector size and
* volume size.
*/
+ XPT_ATA_IO = 0x18 | XPT_FC_DEV_QUEUED,
+ /* Execute the requested ATA I/O operation */
/* HBA engine commands 0x20->0x2F */
XPT_ENG_INQ = 0x20 | XPT_FC_XPT_ONLY,
@@ -213,6 +216,7 @@ typedef enum {
PROTO_SCSI, /* Small Computer System Interface */
PROTO_ATA, /* AT Attachment */
PROTO_ATAPI, /* AT Attachment Packetized Interface */
+ PROTO_SATAPM, /* SATA Port Multiplier */
} cam_proto;
typedef enum {
@@ -225,6 +229,7 @@ typedef enum {
XPORT_PPB, /* Parallel Port Bus */
XPORT_ATA, /* AT Attachment */
XPORT_SAS, /* Serial Attached SCSI */
+ XPORT_SATA, /* Serial AT Attachment */
} cam_xport;
#define PROTO_VERSION_UNKNOWN (UINT_MAX - 1)
@@ -284,7 +289,9 @@ struct ccb_hdr {
/* Get Device Information CCB */
struct ccb_getdev {
struct ccb_hdr ccb_h;
+ cam_proto protocol;
struct scsi_inquiry_data inq_data;
+ struct ata_params ident_data;
u_int8_t serial_num[252];
u_int8_t reserved;
u_int8_t serial_num_len;
@@ -412,7 +419,9 @@ struct device_match_result {
path_id_t path_id;
target_id_t target_id;
lun_id_t target_lun;
+ cam_proto protocol;
struct scsi_inquiry_data inq_data;
+ struct ata_params ident_data;
dev_result_flags flags;
};
@@ -495,6 +504,7 @@ typedef enum {
PI_WIDE_16 = 0x20, /* Supports 16 bit wide SCSI */
PI_SDTR_ABLE = 0x10, /* Supports SDTR message */
PI_LINKED_CDB = 0x08, /* Supports linked CDBs */
+ PI_SATAPM = 0x04, /* Supports SATA PM */
PI_TAG_ABLE = 0x02, /* Supports tag queue messages */
PI_SOFT_RST = 0x01 /* Supports soft reset alternative */
} pi_inqflag;
@@ -562,6 +572,7 @@ struct ccb_pathinq {
struct ccb_pathinq_settings_sas sas;
char ccb_pathinq_settings_opaque[PATHINQ_SETTINGS_SIZE];
} xport_specific;
+ u_int maxio; /* Max supported I/O size, in bytes. */
};
/* Path Statistics CCB */
@@ -617,6 +628,28 @@ struct ccb_scsiio {
u_int init_id; /* initiator id of who selected */
};
+/*
+ * ATA I/O Request CCB used for the XPT_ATA_IO function code.
+ */
+struct ccb_ataio {
+ struct ccb_hdr ccb_h;
+ union ccb *next_ccb; /* Ptr for next CCB for action */
+ struct ata_cmd cmd; /* ATA command register set */
+ struct ata_res res; /* ATA result register set */
+ u_int8_t *data_ptr; /* Ptr to the data buf/SG list */
+ u_int32_t dxfer_len; /* Data transfer length */
+ u_int32_t resid; /* Transfer residual length: 2's comp */
+ u_int8_t tag_action; /* What to do for tag queueing */
+ /*
+ * The tag action should be either the define below (to send a
+ * non-tagged transaction) or one of the defined scsi tag messages
+ * from scsi_message.h.
+ */
+#define CAM_TAG_ACTION_NONE 0x00
+	u_int	   tag_id;		/* tag id from initiator (target mode) */
+ u_int init_id; /* initiator id of who selected */
+};
+
struct ccb_accept_tio {
struct ccb_hdr ccb_h;
cdb_t cdb_io; /* Union for CDB bytes/pointer */
@@ -746,6 +779,13 @@ struct ccb_trans_settings_sas {
u_int32_t bitrate; /* Mbps */
};
+struct ccb_trans_settings_sata {
+ u_int valid; /* Which fields to honor */
+#define CTS_SATA_VALID_SPEED 0x01
+#define CTS_SATA_VALID_PM 0x02
+ u_int32_t bitrate; /* Mbps */
+ u_int pm_present; /* PM is present (XPT->SIM) */
+};
/* Get/Set transfer rate/width/disconnection/tag queueing settings */
struct ccb_trans_settings {
@@ -764,6 +804,7 @@ struct ccb_trans_settings {
struct ccb_trans_settings_spi spi;
struct ccb_trans_settings_fc fc;
struct ccb_trans_settings_sas sas;
+ struct ccb_trans_settings_sata sata;
} xport_specific;
};
@@ -907,6 +948,7 @@ union ccb {
struct ccb_eng_exec cee;
struct ccb_rescan crcn;
struct ccb_debug cdbg;
+ struct ccb_ataio ataio;
};
__BEGIN_DECLS
@@ -924,7 +966,14 @@ cam_fill_ctio(struct ccb_scsiio *csio, u_int32_t retries,
u_int32_t flags, u_int tag_action, u_int tag_id,
u_int init_id, u_int scsi_status, u_int8_t *data_ptr,
u_int32_t dxfer_len, u_int32_t timeout);
-
+
+static __inline void
+cam_fill_ataio(struct ccb_ataio *ataio, u_int32_t retries,
+ void (*cbfcnp)(struct cam_periph *, union ccb *),
+ u_int32_t flags, u_int tag_action,
+ u_int8_t *data_ptr, u_int32_t dxfer_len,
+ u_int32_t timeout);
+
static __inline void
cam_fill_csio(struct ccb_scsiio *csio, u_int32_t retries,
void (*cbfcnp)(struct cam_periph *, union ccb *),
@@ -965,6 +1014,23 @@ cam_fill_ctio(struct ccb_scsiio *csio, u_int32_t retries,
csio->init_id = init_id;
}
+static __inline void
+cam_fill_ataio(struct ccb_ataio *ataio, u_int32_t retries,
+ void (*cbfcnp)(struct cam_periph *, union ccb *),
+ u_int32_t flags, u_int tag_action,
+ u_int8_t *data_ptr, u_int32_t dxfer_len,
+ u_int32_t timeout)
+{
+ ataio->ccb_h.func_code = XPT_ATA_IO;
+ ataio->ccb_h.flags = flags;
+ ataio->ccb_h.retry_count = retries;
+ ataio->ccb_h.cbfcnp = cbfcnp;
+ ataio->ccb_h.timeout = timeout;
+ ataio->data_ptr = data_ptr;
+ ataio->dxfer_len = dxfer_len;
+ ataio->tag_action = tag_action;
+}
+
void cam_calc_geometry(struct ccb_calc_geometry *ccg, int extended);
__END_DECLS
diff --git a/sys/cam/cam_periph.c b/sys/cam/cam_periph.c
index 468354ea099d..c4e0b0412ce3 100644
--- a/sys/cam/cam_periph.c
+++ b/sys/cam/cam_periph.c
@@ -48,6 +48,7 @@ __FBSDID("$FreeBSD$");
#include <cam/cam.h>
#include <cam/cam_ccb.h>
+#include <cam/cam_queue.h>
#include <cam/cam_xpt_periph.h>
#include <cam/cam_periph.h>
#include <cam/cam_debug.h>
@@ -570,6 +571,8 @@ cam_periph_mapmem(union ccb *ccb, struct cam_periph_map_info *mapinfo)
u_int8_t **data_ptrs[CAM_PERIPH_MAXMAPS];
u_int32_t lengths[CAM_PERIPH_MAXMAPS];
u_int32_t dirs[CAM_PERIPH_MAXMAPS];
+ /* Some controllers may not be able to handle more data. */
+ size_t maxmap = DFLTPHYS;
switch(ccb->ccb_h.func_code) {
case XPT_DEV_MATCH:
@@ -592,6 +595,11 @@ cam_periph_mapmem(union ccb *ccb, struct cam_periph_map_info *mapinfo)
dirs[0] = CAM_DIR_IN;
numbufs = 1;
}
+ /*
+	 * This request will not go to the hardware, so there is no
+	 * reason to be so strict.  vmapbuf() can map up to MAXPHYS.
+ */
+ maxmap = MAXPHYS;
break;
case XPT_SCSI_IO:
case XPT_CONT_TARGET_IO:
@@ -603,6 +611,15 @@ cam_periph_mapmem(union ccb *ccb, struct cam_periph_map_info *mapinfo)
dirs[0] = ccb->ccb_h.flags & CAM_DIR_MASK;
numbufs = 1;
break;
+ case XPT_ATA_IO:
+ if ((ccb->ccb_h.flags & CAM_DIR_MASK) == CAM_DIR_NONE)
+ return(0);
+
+ data_ptrs[0] = &ccb->ataio.data_ptr;
+ lengths[0] = ccb->ataio.dxfer_len;
+ dirs[0] = ccb->ccb_h.flags & CAM_DIR_MASK;
+ numbufs = 1;
+ break;
default:
return(EINVAL);
break; /* NOTREACHED */
@@ -625,12 +642,12 @@ cam_periph_mapmem(union ccb *ccb, struct cam_periph_map_info *mapinfo)
* boundary.
*/
if ((lengths[i] +
- (((vm_offset_t)(*data_ptrs[i])) & PAGE_MASK)) > DFLTPHYS){
+ (((vm_offset_t)(*data_ptrs[i])) & PAGE_MASK)) > maxmap){
printf("cam_periph_mapmem: attempt to map %lu bytes, "
- "which is greater than DFLTPHYS(%d)\n",
+ "which is greater than %lu\n",
(long)(lengths[i] +
(((vm_offset_t)(*data_ptrs[i])) & PAGE_MASK)),
- DFLTPHYS);
+ (u_long)maxmap);
return(E2BIG);
}
@@ -662,7 +679,7 @@ cam_periph_mapmem(union ccb *ccb, struct cam_periph_map_info *mapinfo)
/* put our pointer in the data slot */
mapinfo->bp[i]->b_data = *data_ptrs[i];
- /* set the transfer length, we know it's < DFLTPHYS */
+ /* set the transfer length, we know it's < MAXPHYS */
mapinfo->bp[i]->b_bufsize = lengths[i];
/* set the direction */
@@ -738,6 +755,10 @@ cam_periph_unmapmem(union ccb *ccb, struct cam_periph_map_info *mapinfo)
data_ptrs[0] = &ccb->csio.data_ptr;
numbufs = min(mapinfo->num_bufs_used, 1);
break;
+ case XPT_ATA_IO:
+ data_ptrs[0] = &ccb->ataio.data_ptr;
+ numbufs = min(mapinfo->num_bufs_used, 1);
+ break;
default:
/* allow ourselves to be swapped once again */
PRELE(curproc);
@@ -1583,6 +1604,13 @@ cam_periph_error(union ccb *ccb, cam_flags camflags,
xpt_print(ccb->ccb_h.path, "AutoSense Failed\n");
error = EIO; /* we have to kill the command */
break;
+ case CAM_ATA_STATUS_ERROR:
+ if (bootverbose && printed == 0) {
+ xpt_print(ccb->ccb_h.path,
+ "Request completed with CAM_ATA_STATUS_ERROR\n");
+ printed++;
+ }
+ /* FALLTHROUGH */
case CAM_REQ_CMP_ERR:
if (bootverbose && printed == 0) {
xpt_print(ccb->ccb_h.path,
diff --git a/sys/cam/cam_xpt.c b/sys/cam/cam_xpt.c
index 08286db2277d..fe759e95de25 100644
--- a/sys/cam/cam_xpt.c
+++ b/sys/cam/cam_xpt.c
@@ -56,10 +56,12 @@ __FBSDID("$FreeBSD$");
#include <cam/cam.h>
#include <cam/cam_ccb.h>
#include <cam/cam_periph.h>
+#include <cam/cam_queue.h>
#include <cam/cam_sim.h>
#include <cam/cam_xpt.h>
#include <cam/cam_xpt_sim.h>
#include <cam/cam_xpt_periph.h>
+#include <cam/cam_xpt_internal.h>
#include <cam/cam_debug.h>
#include <cam/scsi/scsi_all.h>
@@ -68,31 +70,6 @@ __FBSDID("$FreeBSD$");
#include <machine/stdarg.h> /* for xpt_print below */
#include "opt_cam.h"
-/* Datastructures internal to the xpt layer */
-MALLOC_DEFINE(M_CAMXPT, "CAM XPT", "CAM XPT buffers");
-
-/* Object for defering XPT actions to a taskqueue */
-struct xpt_task {
- struct task task;
- void *data1;
- uintptr_t data2;
-};
-
-/*
- * Definition of an async handler callback block. These are used to add
- * SIMs and peripherals to the async callback lists.
- */
-struct async_node {
- SLIST_ENTRY(async_node) links;
- u_int32_t event_enable; /* Async Event enables */
- void (*callback)(void *arg, u_int32_t code,
- struct cam_path *path, void *args);
- void *callback_arg;
-};
-
-SLIST_HEAD(async_list, async_node);
-SLIST_HEAD(periph_list, cam_periph);
-
/*
* This is the maximum number of high powered commands (e.g. start unit)
* that can be outstanding at a particular time.
@@ -101,148 +78,16 @@ SLIST_HEAD(periph_list, cam_periph);
#define CAM_MAX_HIGHPOWER 4
#endif
-/*
- * Structure for queueing a device in a run queue.
- * There is one run queue for allocating new ccbs,
- * and another for sending ccbs to the controller.
- */
-struct cam_ed_qinfo {
- cam_pinfo pinfo;
- struct cam_ed *device;
-};
-
-/*
- * The CAM EDT (Existing Device Table) contains the device information for
- * all devices for all busses in the system. The table contains a
- * cam_ed structure for each device on the bus.
- */
-struct cam_ed {
- TAILQ_ENTRY(cam_ed) links;
- struct cam_ed_qinfo alloc_ccb_entry;
- struct cam_ed_qinfo send_ccb_entry;
- struct cam_et *target;
- struct cam_sim *sim;
- lun_id_t lun_id;
- struct camq drvq; /*
- * Queue of type drivers wanting to do
- * work on this device.
- */
- struct cam_ccbq ccbq; /* Queue of pending ccbs */
- struct async_list asyncs; /* Async callback info for this B/T/L */
- struct periph_list periphs; /* All attached devices */
- u_int generation; /* Generation number */
- struct cam_periph *owner; /* Peripheral driver's ownership tag */
- struct xpt_quirk_entry *quirk; /* Oddities about this device */
- /* Storage for the inquiry data */
- cam_proto protocol;
- u_int protocol_version;
- cam_xport transport;
- u_int transport_version;
- struct scsi_inquiry_data inq_data;
- u_int8_t inq_flags; /*
- * Current settings for inquiry flags.
- * This allows us to override settings
- * like disconnection and tagged
- * queuing for a device.
- */
- u_int8_t queue_flags; /* Queue flags from the control page */
- u_int8_t serial_num_len;
- u_int8_t *serial_num;
- u_int32_t qfrozen_cnt;
- u_int32_t flags;
-#define CAM_DEV_UNCONFIGURED 0x01
-#define CAM_DEV_REL_TIMEOUT_PENDING 0x02
-#define CAM_DEV_REL_ON_COMPLETE 0x04
-#define CAM_DEV_REL_ON_QUEUE_EMPTY 0x08
-#define CAM_DEV_RESIZE_QUEUE_NEEDED 0x10
-#define CAM_DEV_TAG_AFTER_COUNT 0x20
-#define CAM_DEV_INQUIRY_DATA_VALID 0x40
-#define CAM_DEV_IN_DV 0x80
-#define CAM_DEV_DV_HIT_BOTTOM 0x100
- u_int32_t tag_delay_count;
-#define CAM_TAG_DELAY_COUNT 5
- u_int32_t tag_saved_openings;
- u_int32_t refcount;
- struct callout callout;
-};
-
-/*
- * Each target is represented by an ET (Existing Target). These
- * entries are created when a target is successfully probed with an
- * identify, and removed when a device fails to respond after a number
- * of retries, or a bus rescan finds the device missing.
- */
-struct cam_et {
- TAILQ_HEAD(, cam_ed) ed_entries;
- TAILQ_ENTRY(cam_et) links;
- struct cam_eb *bus;
- target_id_t target_id;
- u_int32_t refcount;
- u_int generation;
- struct timeval last_reset;
-};
-
-/*
- * Each bus is represented by an EB (Existing Bus). These entries
- * are created by calls to xpt_bus_register and deleted by calls to
- * xpt_bus_deregister.
- */
-struct cam_eb {
- TAILQ_HEAD(, cam_et) et_entries;
- TAILQ_ENTRY(cam_eb) links;
- path_id_t path_id;
- struct cam_sim *sim;
- struct timeval last_reset;
- u_int32_t flags;
-#define CAM_EB_RUNQ_SCHEDULED 0x01
- u_int32_t refcount;
- u_int generation;
- device_t parent_dev;
-};
-
-struct cam_path {
- struct cam_periph *periph;
- struct cam_eb *bus;
- struct cam_et *target;
- struct cam_ed *device;
-};
+/* Datastructures internal to the xpt layer */
+MALLOC_DEFINE(M_CAMXPT, "CAM XPT", "CAM XPT buffers");
-struct xpt_quirk_entry {
- struct scsi_inquiry_pattern inq_pat;
- u_int8_t quirks;
-#define CAM_QUIRK_NOLUNS 0x01
-#define CAM_QUIRK_NOSERIAL 0x02
-#define CAM_QUIRK_HILUNS 0x04
-#define CAM_QUIRK_NOHILUNS 0x08
- u_int mintags;
- u_int maxtags;
+/* Object for deferring XPT actions to a taskqueue */
+struct xpt_task {
+ struct task task;
+ void *data1;
+ uintptr_t data2;
};
-static int cam_srch_hi = 0;
-TUNABLE_INT("kern.cam.cam_srch_hi", &cam_srch_hi);
-static int sysctl_cam_search_luns(SYSCTL_HANDLER_ARGS);
-SYSCTL_PROC(_kern_cam, OID_AUTO, cam_srch_hi, CTLTYPE_INT|CTLFLAG_RW, 0, 0,
- sysctl_cam_search_luns, "I",
- "allow search above LUN 7 for SCSI3 and greater devices");
-
-#define CAM_SCSI2_MAXLUN 8
-/*
- * If we're not quirked to search <= the first 8 luns
- * and we are either quirked to search above lun 8,
- * or we're > SCSI-2 and we've enabled hilun searching,
- * or we're > SCSI-2 and the last lun was a success,
- * we can look for luns above lun 8.
- */
-#define CAN_SRCH_HI_SPARSE(dv) \
- (((dv->quirk->quirks & CAM_QUIRK_NOHILUNS) == 0) \
- && ((dv->quirk->quirks & CAM_QUIRK_HILUNS) \
- || (SID_ANSI_REV(&dv->inq_data) > SCSI_REV_2 && cam_srch_hi)))
-
-#define CAN_SRCH_HI_DENSE(dv) \
- (((dv->quirk->quirks & CAM_QUIRK_NOHILUNS) == 0) \
- && ((dv->quirk->quirks & CAM_QUIRK_HILUNS) \
- || (SID_ANSI_REV(&dv->inq_data) > SCSI_REV_2)))
-
typedef enum {
XPT_FLAG_OPEN = 0x01
} xpt_flags;
@@ -268,359 +113,6 @@ struct xpt_softc {
struct mtx xpt_lock;
};
-static const char quantum[] = "QUANTUM";
-static const char sony[] = "SONY";
-static const char west_digital[] = "WDIGTL";
-static const char samsung[] = "SAMSUNG";
-static const char seagate[] = "SEAGATE";
-static const char microp[] = "MICROP";
-
-static struct xpt_quirk_entry xpt_quirk_table[] =
-{
- {
- /* Reports QUEUE FULL for temporary resource shortages */
- { T_DIRECT, SIP_MEDIA_FIXED, quantum, "XP39100*", "*" },
- /*quirks*/0, /*mintags*/24, /*maxtags*/32
- },
- {
- /* Reports QUEUE FULL for temporary resource shortages */
- { T_DIRECT, SIP_MEDIA_FIXED, quantum, "XP34550*", "*" },
- /*quirks*/0, /*mintags*/24, /*maxtags*/32
- },
- {
- /* Reports QUEUE FULL for temporary resource shortages */
- { T_DIRECT, SIP_MEDIA_FIXED, quantum, "XP32275*", "*" },
- /*quirks*/0, /*mintags*/24, /*maxtags*/32
- },
- {
- /* Broken tagged queuing drive */
- { T_DIRECT, SIP_MEDIA_FIXED, microp, "4421-07*", "*" },
- /*quirks*/0, /*mintags*/0, /*maxtags*/0
- },
- {
- /* Broken tagged queuing drive */
- { T_DIRECT, SIP_MEDIA_FIXED, "HP", "C372*", "*" },
- /*quirks*/0, /*mintags*/0, /*maxtags*/0
- },
- {
- /* Broken tagged queuing drive */
- { T_DIRECT, SIP_MEDIA_FIXED, microp, "3391*", "x43h" },
- /*quirks*/0, /*mintags*/0, /*maxtags*/0
- },
- {
- /*
- * Unfortunately, the Quantum Atlas III has the same
- * problem as the Atlas II drives above.
- * Reported by: "Johan Granlund" <johan@granlund.nu>
- *
- * For future reference, the drive with the problem was:
- * QUANTUM QM39100TD-SW N1B0
- *
- * It's possible that Quantum will fix the problem in later
- * firmware revisions. If that happens, the quirk entry
- * will need to be made specific to the firmware revisions
- * with the problem.
- *
- */
- /* Reports QUEUE FULL for temporary resource shortages */
- { T_DIRECT, SIP_MEDIA_FIXED, quantum, "QM39100*", "*" },
- /*quirks*/0, /*mintags*/24, /*maxtags*/32
- },
- {
- /*
- * 18 Gig Atlas III, same problem as the 9G version.
- * Reported by: Andre Albsmeier
- * <andre.albsmeier@mchp.siemens.de>
- *
- * For future reference, the drive with the problem was:
- * QUANTUM QM318000TD-S N491
- */
- /* Reports QUEUE FULL for temporary resource shortages */
- { T_DIRECT, SIP_MEDIA_FIXED, quantum, "QM318000*", "*" },
- /*quirks*/0, /*mintags*/24, /*maxtags*/32
- },
- {
- /*
- * Broken tagged queuing drive
- * Reported by: Bret Ford <bford@uop.cs.uop.edu>
- * and: Martin Renters <martin@tdc.on.ca>
- */
- { T_DIRECT, SIP_MEDIA_FIXED, seagate, "ST410800*", "71*" },
- /*quirks*/0, /*mintags*/0, /*maxtags*/0
- },
- /*
- * The Seagate Medalist Pro drives have very poor write
- * performance with anything more than 2 tags.
- *
- * Reported by: Paul van der Zwan <paulz@trantor.xs4all.nl>
- * Drive: <SEAGATE ST36530N 1444>
- *
- * Reported by: Jeremy Lea <reg@shale.csir.co.za>
- * Drive: <SEAGATE ST34520W 1281>
- *
- * No one has actually reported that the 9G version
- * (ST39140*) of the Medalist Pro has the same problem, but
- * we're assuming that it does because the 4G and 6.5G
- * versions of the drive are broken.
- */
- {
- { T_DIRECT, SIP_MEDIA_FIXED, seagate, "ST34520*", "*"},
- /*quirks*/0, /*mintags*/2, /*maxtags*/2
- },
- {
- { T_DIRECT, SIP_MEDIA_FIXED, seagate, "ST36530*", "*"},
- /*quirks*/0, /*mintags*/2, /*maxtags*/2
- },
- {
- { T_DIRECT, SIP_MEDIA_FIXED, seagate, "ST39140*", "*"},
- /*quirks*/0, /*mintags*/2, /*maxtags*/2
- },
- {
- /*
- * Slow when tagged queueing is enabled. Write performance
- * steadily drops off with more and more concurrent
- * transactions. Best sequential write performance with
- * tagged queueing turned off and write caching turned on.
- *
- * PR: kern/10398
- * Submitted by: Hideaki Okada <hokada@isl.melco.co.jp>
- * Drive: DCAS-34330 w/ "S65A" firmware.
- *
- * The drive with the problem had the "S65A" firmware
- * revision, and has also been reported (by Stephen J.
- * Roznowski <sjr@home.net>) for a drive with the "S61A"
- * firmware revision.
- *
- * Although no one has reported problems with the 2 gig
- * version of the DCAS drive, the assumption is that it
- * has the same problems as the 4 gig version. Therefore
- * this quirk entries disables tagged queueing for all
- * DCAS drives.
- */
- { T_DIRECT, SIP_MEDIA_FIXED, "IBM", "DCAS*", "*" },
- /*quirks*/0, /*mintags*/0, /*maxtags*/0
- },
- {
- /* Broken tagged queuing drive */
- { T_DIRECT, SIP_MEDIA_REMOVABLE, "iomega", "jaz*", "*" },
- /*quirks*/0, /*mintags*/0, /*maxtags*/0
- },
- {
- /* Broken tagged queuing drive */
- { T_DIRECT, SIP_MEDIA_FIXED, "CONNER", "CFP2107*", "*" },
- /*quirks*/0, /*mintags*/0, /*maxtags*/0
- },
- {
- /* This does not support other than LUN 0 */
- { T_DIRECT, SIP_MEDIA_FIXED, "VMware*", "*", "*" },
- CAM_QUIRK_NOLUNS, /*mintags*/2, /*maxtags*/255
- },
- {
- /*
- * Broken tagged queuing drive.
- * Submitted by:
- * NAKAJI Hiroyuki <nakaji@zeisei.dpri.kyoto-u.ac.jp>
- * in PR kern/9535
- */
- { T_DIRECT, SIP_MEDIA_FIXED, samsung, "WN34324U*", "*" },
- /*quirks*/0, /*mintags*/0, /*maxtags*/0
- },
- {
- /*
- * Slow when tagged queueing is enabled. (1.5MB/sec versus
- * 8MB/sec.)
- * Submitted by: Andrew Gallatin <gallatin@cs.duke.edu>
- * Best performance with these drives is achieved with
- * tagged queueing turned off, and write caching turned on.
- */
- { T_DIRECT, SIP_MEDIA_FIXED, west_digital, "WDE*", "*" },
- /*quirks*/0, /*mintags*/0, /*maxtags*/0
- },
- {
- /*
- * Slow when tagged queueing is enabled. (1.5MB/sec versus
- * 8MB/sec.)
- * Submitted by: Andrew Gallatin <gallatin@cs.duke.edu>
- * Best performance with these drives is achieved with
- * tagged queueing turned off, and write caching turned on.
- */
- { T_DIRECT, SIP_MEDIA_FIXED, west_digital, "ENTERPRISE", "*" },
- /*quirks*/0, /*mintags*/0, /*maxtags*/0
- },
- {
- /*
- * Doesn't handle queue full condition correctly,
- * so we need to limit maxtags to what the device
- * can handle instead of determining this automatically.
- */
- { T_DIRECT, SIP_MEDIA_FIXED, samsung, "WN321010S*", "*" },
- /*quirks*/0, /*mintags*/2, /*maxtags*/32
- },
- {
- /* Really only one LUN */
- { T_ENCLOSURE, SIP_MEDIA_FIXED, "SUN", "SENA", "*" },
- CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
- },
- {
- /* I can't believe we need a quirk for DPT volumes. */
- { T_ANY, SIP_MEDIA_FIXED|SIP_MEDIA_REMOVABLE, "DPT", "*", "*" },
- CAM_QUIRK_NOLUNS,
- /*mintags*/0, /*maxtags*/255
- },
- {
- /*
- * Many Sony CDROM drives don't like multi-LUN probing.
- */
- { T_CDROM, SIP_MEDIA_REMOVABLE, sony, "CD-ROM CDU*", "*" },
- CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
- },
- {
- /*
- * This drive doesn't like multiple LUN probing.
- * Submitted by: Parag Patel <parag@cgt.com>
- */
- { T_WORM, SIP_MEDIA_REMOVABLE, sony, "CD-R CDU9*", "*" },
- CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
- },
- {
- { T_WORM, SIP_MEDIA_REMOVABLE, "YAMAHA", "CDR100*", "*" },
- CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
- },
- {
- /*
- * The 8200 doesn't like multi-lun probing, and probably
- * don't like serial number requests either.
- */
- {
- T_SEQUENTIAL, SIP_MEDIA_REMOVABLE, "EXABYTE",
- "EXB-8200*", "*"
- },
- CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
- },
- {
- /*
- * Let's try the same as above, but for a drive that says
- * it's an IPL-6860 but is actually an EXB 8200.
- */
- {
- T_SEQUENTIAL, SIP_MEDIA_REMOVABLE, "EXABYTE",
- "IPL-6860*", "*"
- },
- CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
- },
- {
- /*
- * These Hitachi drives don't like multi-lun probing.
- * The PR submitter has a DK319H, but says that the Linux
- * kernel has a similar work-around for the DK312 and DK314,
- * so all DK31* drives are quirked here.
- * PR: misc/18793
- * Submitted by: Paul Haddad <paul@pth.com>
- */
- { T_DIRECT, SIP_MEDIA_FIXED, "HITACHI", "DK31*", "*" },
- CAM_QUIRK_NOLUNS, /*mintags*/2, /*maxtags*/255
- },
- {
- /*
- * The Hitachi CJ series with J8A8 firmware apparantly has
- * problems with tagged commands.
- * PR: 23536
- * Reported by: amagai@nue.org
- */
- { T_DIRECT, SIP_MEDIA_FIXED, "HITACHI", "DK32CJ*", "J8A8" },
- CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
- },
- {
- /*
- * These are the large storage arrays.
- * Submitted by: William Carrel <william.carrel@infospace.com>
- */
- { T_DIRECT, SIP_MEDIA_FIXED, "HITACHI", "OPEN*", "*" },
- CAM_QUIRK_HILUNS, 2, 1024
- },
- {
- /*
- * This old revision of the TDC3600 is also SCSI-1, and
- * hangs upon serial number probing.
- */
- {
- T_SEQUENTIAL, SIP_MEDIA_REMOVABLE, "TANDBERG",
- " TDC 3600", "U07:"
- },
- CAM_QUIRK_NOSERIAL, /*mintags*/0, /*maxtags*/0
- },
- {
- /*
- * Would repond to all LUNs if asked for.
- */
- {
- T_SEQUENTIAL, SIP_MEDIA_REMOVABLE, "CALIPER",
- "CP150", "*"
- },
- CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
- },
- {
- /*
- * Would repond to all LUNs if asked for.
- */
- {
- T_SEQUENTIAL, SIP_MEDIA_REMOVABLE, "KENNEDY",
- "96X2*", "*"
- },
- CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
- },
- {
- /* Submitted by: Matthew Dodd <winter@jurai.net> */
- { T_PROCESSOR, SIP_MEDIA_FIXED, "Cabletrn", "EA41*", "*" },
- CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
- },
- {
- /* Submitted by: Matthew Dodd <winter@jurai.net> */
- { T_PROCESSOR, SIP_MEDIA_FIXED, "CABLETRN", "EA41*", "*" },
- CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
- },
- {
- /* TeraSolutions special settings for TRC-22 RAID */
- { T_DIRECT, SIP_MEDIA_FIXED, "TERASOLU", "TRC-22", "*" },
- /*quirks*/0, /*mintags*/55, /*maxtags*/255
- },
- {
- /* Veritas Storage Appliance */
- { T_DIRECT, SIP_MEDIA_FIXED, "VERITAS", "*", "*" },
- CAM_QUIRK_HILUNS, /*mintags*/2, /*maxtags*/1024
- },
- {
- /*
- * Would respond to all LUNs. Device type and removable
- * flag are jumper-selectable.
- */
- { T_ANY, SIP_MEDIA_REMOVABLE|SIP_MEDIA_FIXED, "MaxOptix",
- "Tahiti 1", "*"
- },
- CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
- },
- {
- /* EasyRAID E5A aka. areca ARC-6010 */
- { T_DIRECT, SIP_MEDIA_FIXED, "easyRAID", "*", "*" },
- CAM_QUIRK_NOHILUNS, /*mintags*/2, /*maxtags*/255
- },
- {
- { T_ENCLOSURE, SIP_MEDIA_FIXED, "DP", "BACKPLANE", "*" },
- CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
- },
- {
- /* Default tagged queuing parameters for all devices */
- {
- T_ANY, SIP_MEDIA_REMOVABLE|SIP_MEDIA_FIXED,
- /*vendor*/"*", /*product*/"*", /*revision*/"*"
- },
- /*quirks*/0, /*mintags*/2, /*maxtags*/255
- },
-};
-
-static const int xpt_quirk_table_size =
- sizeof(xpt_quirk_table) / sizeof(*xpt_quirk_table);
-
typedef enum {
DM_RET_COPY = 0x01,
DM_RET_FLAG_MASK = 0x0f,
@@ -666,23 +158,13 @@ struct cam_periph *xpt_periph;
static periph_init_t xpt_periph_init;
-static periph_init_t probe_periph_init;
-
static struct periph_driver xpt_driver =
{
xpt_periph_init, "xpt",
TAILQ_HEAD_INITIALIZER(xpt_driver.units)
};
-static struct periph_driver probe_driver =
-{
- probe_periph_init, "probe",
- TAILQ_HEAD_INITIALIZER(probe_driver.units)
-};
-
PERIPHDRIVER_DECLARE(xpt, xpt_driver);
-PERIPHDRIVER_DECLARE(probe, probe_driver);
-
static d_open_t xptopen;
static d_close_t xptclose;
@@ -697,7 +179,6 @@ static struct cdevsw xpt_cdevsw = {
.d_name = "xpt",
};
-
/* Storage for debugging datastructures */
#ifdef CAMDEBUG
struct cam_path *cam_dpath;
@@ -705,28 +186,6 @@ u_int32_t cam_dflags;
u_int32_t cam_debug_delay;
#endif
-#if defined(CAM_DEBUG_FLAGS) && !defined(CAMDEBUG)
-#error "You must have options CAMDEBUG to use options CAM_DEBUG_FLAGS"
-#endif
-
-/*
- * In order to enable the CAM_DEBUG_* options, the user must have CAMDEBUG
- * enabled. Also, the user must have either none, or all of CAM_DEBUG_BUS,
- * CAM_DEBUG_TARGET, and CAM_DEBUG_LUN specified.
- */
-#if defined(CAM_DEBUG_BUS) || defined(CAM_DEBUG_TARGET) \
- || defined(CAM_DEBUG_LUN)
-#ifdef CAMDEBUG
-#if !defined(CAM_DEBUG_BUS) || !defined(CAM_DEBUG_TARGET) \
- || !defined(CAM_DEBUG_LUN)
-#error "You must define all or none of CAM_DEBUG_BUS, CAM_DEBUG_TARGET \
- and CAM_DEBUG_LUN"
-#endif /* !CAM_DEBUG_BUS || !CAM_DEBUG_TARGET || !CAM_DEBUG_LUN */
-#else /* !CAMDEBUG */
-#error "You must use options CAMDEBUG if you use the CAM_DEBUG_* options"
-#endif /* CAMDEBUG */
-#endif /* CAM_DEBUG_BUS || CAM_DEBUG_TARGET || CAM_DEBUG_LUN */
-
/* Our boot-time initialization hook */
static int cam_module_event_handler(module_t, int /*modeventtype_t*/, void *);
@@ -742,30 +201,14 @@ DECLARE_MODULE(cam, cam_moduledata, SI_SUB_CONFIGURE, SI_ORDER_SECOND);
MODULE_VERSION(cam, 1);
-static cam_status xpt_compile_path(struct cam_path *new_path,
- struct cam_periph *perph,
- path_id_t path_id,
- target_id_t target_id,
- lun_id_t lun_id);
-
-static void xpt_release_path(struct cam_path *path);
-
static void xpt_async_bcast(struct async_list *async_head,
u_int32_t async_code,
struct cam_path *path,
void *async_arg);
-static void xpt_dev_async(u_int32_t async_code,
- struct cam_eb *bus,
- struct cam_et *target,
- struct cam_ed *device,
- void *async_arg);
static path_id_t xptnextfreepathid(void);
static path_id_t xptpathid(const char *sim_name, int sim_unit, int sim_bus);
static union ccb *xpt_get_ccb(struct cam_ed *device);
-static int xpt_schedule_dev(struct camq *queue, cam_pinfo *dev_pinfo,
- u_int32_t new_priority);
static void xpt_run_dev_allocq(struct cam_eb *bus);
-static void xpt_run_dev_sendq(struct cam_eb *bus);
static timeout_t xpt_release_devq_timeout;
static void xpt_release_simq_timeout(void *arg) __unused;
static void xpt_release_bus(struct cam_eb *bus);
@@ -774,23 +217,14 @@ static void xpt_release_devq_device(struct cam_ed *dev, u_int count,
static struct cam_et*
xpt_alloc_target(struct cam_eb *bus, target_id_t target_id);
static void xpt_release_target(struct cam_eb *bus, struct cam_et *target);
-static struct cam_ed*
- xpt_alloc_device(struct cam_eb *bus, struct cam_et *target,
- lun_id_t lun_id);
static void xpt_release_device(struct cam_eb *bus, struct cam_et *target,
struct cam_ed *device);
-static u_int32_t xpt_dev_ccbq_resize(struct cam_path *path, int newopenings);
static struct cam_eb*
xpt_find_bus(path_id_t path_id);
static struct cam_et*
xpt_find_target(struct cam_eb *bus, target_id_t target_id);
static struct cam_ed*
xpt_find_device(struct cam_et *target, lun_id_t lun_id);
-static void xpt_scan_bus(struct cam_periph *periph, union ccb *ccb);
-static void xpt_scan_lun(struct cam_periph *periph,
- struct cam_path *path, cam_flags flags,
- union ccb *ccb);
-static void xptscandone(struct cam_periph *periph, union ccb *done_ccb);
static xpt_busfunc_t xptconfigbuscountfunc;
static xpt_busfunc_t xptconfigfunc;
static void xpt_config(void *arg);
@@ -840,30 +274,21 @@ static xpt_periphfunc_t xptdefperiphfunc;
static int xpt_for_all_busses(xpt_busfunc_t *tr_func, void *arg);
static int xpt_for_all_devices(xpt_devicefunc_t *tr_func,
void *arg);
+static void xpt_dev_async_default(u_int32_t async_code,
+ struct cam_eb *bus,
+ struct cam_et *target,
+ struct cam_ed *device,
+ void *async_arg);
+static struct cam_ed * xpt_alloc_device_default(struct cam_eb *bus,
+ struct cam_et *target,
+ lun_id_t lun_id);
static xpt_devicefunc_t xptsetasyncfunc;
static xpt_busfunc_t xptsetasyncbusfunc;
static cam_status xptregister(struct cam_periph *periph,
void *arg);
-static cam_status proberegister(struct cam_periph *periph,
- void *arg);
-static void probeschedule(struct cam_periph *probe_periph);
-static void probestart(struct cam_periph *periph, union ccb *start_ccb);
-static void proberequestdefaultnegotiation(struct cam_periph *periph);
-static int proberequestbackoff(struct cam_periph *periph,
- struct cam_ed *device);
-static void probedone(struct cam_periph *periph, union ccb *done_ccb);
-static void probecleanup(struct cam_periph *periph);
-static void xpt_find_quirk(struct cam_ed *device);
-static void xpt_devise_transport(struct cam_path *path);
-static void xpt_set_transfer_settings(struct ccb_trans_settings *cts,
- struct cam_ed *device,
- int async_update);
-static void xpt_toggle_tags(struct cam_path *path);
static void xpt_start_tags(struct cam_path *path);
static __inline int xpt_schedule_dev_allocq(struct cam_eb *bus,
struct cam_ed *dev);
-static __inline int xpt_schedule_dev_sendq(struct cam_eb *bus,
- struct cam_ed *dev);
static __inline int periph_is_queued(struct cam_periph *periph);
static __inline int device_is_alloc_queued(struct cam_ed *device);
static __inline int device_is_send_queued(struct cam_ed *device);
@@ -897,27 +322,6 @@ xpt_schedule_dev_allocq(struct cam_eb *bus, struct cam_ed *dev)
}
static __inline int
-xpt_schedule_dev_sendq(struct cam_eb *bus, struct cam_ed *dev)
-{
- int retval;
-
- if (dev->ccbq.dev_openings > 0) {
- /*
- * The priority of a device waiting for controller
-		 * resources is that of the highest priority CCB
- * enqueued.
- */
- retval =
- xpt_schedule_dev(&bus->sim->devq->send_queue,
- &dev->send_ccb_entry.pinfo,
- CAMQ_GET_HEAD(&dev->ccbq.queue)->priority);
- } else {
- retval = 0;
- }
- return (retval);
-}
-
-static __inline int
periph_is_queued(struct cam_periph *periph)
{
return (periph->pinfo.index != CAM_UNQUEUED_INDEX);
@@ -955,12 +359,6 @@ xpt_periph_init()
}
static void
-probe_periph_init()
-{
-}
-
-
-static void
xptdone(struct cam_periph *periph, union ccb *done_ccb)
{
/* Caller will release the CCB */
@@ -1643,7 +1041,13 @@ xpt_announce_periph(struct cam_periph *periph, char *announce_string)
path->target->target_id,
path->device->lun_id);
printf("%s%d: ", periph->periph_name, periph->unit_number);
- scsi_print_inquiry(&path->device->inq_data);
+ if (path->device->protocol == PROTO_SCSI)
+ scsi_print_inquiry(&path->device->inq_data);
+ else if (path->device->protocol == PROTO_ATA ||
+ path->device->protocol == PROTO_SATAPM)
+ ata_print_ident(&path->device->ident_data);
+ else
+ printf("Unknown protocol device\n");
if (bootverbose && path->device->serial_num_len > 0) {
/* Don't wrap the screen - print only the first 60 chars */
printf("%s%d: Serial Number %.60s\n", periph->periph_name,
@@ -1677,19 +1081,20 @@ xpt_announce_periph(struct cam_periph *periph, char *announce_string)
if ((spi->valid & CTS_SPI_VALID_BUS_WIDTH) != 0)
speed *= (0x01 << spi->bus_width);
}
-
if (cts.ccb_h.status == CAM_REQ_CMP && cts.transport == XPORT_FC) {
struct ccb_trans_settings_fc *fc = &cts.xport_specific.fc;
- if (fc->valid & CTS_FC_VALID_SPEED) {
+ if (fc->valid & CTS_FC_VALID_SPEED)
speed = fc->bitrate;
- }
}
-
if (cts.ccb_h.status == CAM_REQ_CMP && cts.transport == XPORT_SAS) {
struct ccb_trans_settings_sas *sas = &cts.xport_specific.sas;
- if (sas->valid & CTS_SAS_VALID_SPEED) {
+ if (sas->valid & CTS_SAS_VALID_SPEED)
speed = sas->bitrate;
- }
+ }
+ if (cts.ccb_h.status == CAM_REQ_CMP && cts.transport == XPORT_SATA) {
+ struct ccb_trans_settings_sata *sata = &cts.xport_specific.sata;
+ if (sata->valid & CTS_SATA_VALID_SPEED)
+ speed = sata->bitrate;
}
mb = speed / 1000;
@@ -1738,7 +1143,7 @@ xpt_announce_periph(struct cam_periph *periph, char *announce_string)
if (path->device->inq_flags & SID_CmdQue
|| path->device->flags & CAM_DEV_TAG_AFTER_COUNT) {
- printf("\n%s%d: Command Queueing Enabled",
+ printf("\n%s%d: Command Queueing enabled",
periph->periph_name, periph->unit_number);
}
printf("\n");
@@ -2288,9 +1693,14 @@ xptedtdevicefunc(struct cam_ed *device, void *arg)
device->target->target_id;
cdm->matches[j].result.device_result.target_lun =
device->lun_id;
+ cdm->matches[j].result.device_result.protocol =
+ device->protocol;
bcopy(&device->inq_data,
&cdm->matches[j].result.device_result.inq_data,
sizeof(struct scsi_inquiry_data));
+ bcopy(&device->ident_data,
+ &cdm->matches[j].result.device_result.ident_data,
+ sizeof(struct ata_params));
/* Let the user know whether this device is unconfigured */
if (device->flags & CAM_DEV_UNCONFIGURED)
@@ -2990,6 +2400,15 @@ xpt_action(union ccb *start_ccb)
CAM_DEBUG(start_ccb->ccb_h.path, CAM_DEBUG_TRACE, ("xpt_action\n"));
start_ccb->ccb_h.status = CAM_REQ_INPROG;
+ (*(start_ccb->ccb_h.path->bus->xport->action))(start_ccb);
+}
+
+void
+xpt_action_default(union ccb *start_ccb)
+{
+
+ CAM_DEBUG(start_ccb->ccb_h.path, CAM_DEBUG_TRACE, ("xpt_action_default\n"));
+
switch (start_ccb->ccb_h.func_code) {
case XPT_SCSI_IO:
@@ -3039,6 +2458,10 @@ xpt_action(union ccb *start_ccb)
start_ccb->csio.sense_resid = 0;
start_ccb->csio.resid = 0;
/* FALLTHROUGH */
+ case XPT_ATA_IO:
+ if (start_ccb->ccb_h.func_code == XPT_ATA_IO) {
+ start_ccb->ataio.resid = 0;
+ }
case XPT_RESET_DEV:
case XPT_ENG_EXEC:
{
@@ -3056,13 +2479,6 @@ xpt_action(union ccb *start_ccb)
xpt_run_dev_sendq(path->bus);
break;
}
- case XPT_SET_TRAN_SETTINGS:
- {
- xpt_set_transfer_settings(&start_ccb->cts,
- start_ccb->ccb_h.path->device,
- /*async_update*/FALSE);
- break;
- }
case XPT_CALC_GEOMETRY:
{
struct cam_sim *sim;
@@ -3148,7 +2564,6 @@ xpt_action(union ccb *start_ccb)
case XPT_EN_LUN:
case XPT_IMMED_NOTIFY:
case XPT_NOTIFY_ACK:
- case XPT_GET_TRAN_SETTINGS:
case XPT_RESET_BUS:
{
struct cam_sim *sim;
@@ -3185,7 +2600,9 @@ xpt_action(union ccb *start_ccb)
cgd = &start_ccb->cgd;
bus = cgd->ccb_h.path->bus;
tar = cgd->ccb_h.path->target;
+ cgd->protocol = dev->protocol;
cgd->inq_data = dev->inq_data;
+ cgd->ident_data = dev->ident_data;
cgd->ccb_h.status = CAM_REQ_CMP;
cgd->serial_num_len = dev->serial_num_len;
if ((dev->serial_num_len > 0)
@@ -3216,8 +2633,8 @@ xpt_action(union ccb *start_ccb)
cgds->devq_queued = dev->ccbq.queue.entries;
cgds->held = dev->ccbq.held;
cgds->last_reset = tar->last_reset;
- cgds->maxtags = dev->quirk->maxtags;
- cgds->mintags = dev->quirk->mintags;
+ cgds->maxtags = dev->maxtags;
+ cgds->mintags = dev->mintags;
if (timevalcmp(&tar->last_reset, &bus->last_reset, <))
cgds->last_reset = bus->last_reset;
cgds->ccb_h.status = CAM_REQ_CMP;
@@ -3513,14 +2930,6 @@ xpt_action(union ccb *start_ccb)
start_ccb->ccb_h.status = CAM_REQ_CMP;
break;
}
- case XPT_SCAN_BUS:
- xpt_scan_bus(start_ccb->ccb_h.path->periph, start_ccb);
- break;
- case XPT_SCAN_LUN:
- xpt_scan_lun(start_ccb->ccb_h.path->periph,
- start_ccb->ccb_h.path, start_ccb->crcn.flags,
- start_ccb);
- break;
case XPT_DEBUG: {
#ifdef CAMDEBUG
#ifdef CAM_DEBUG_DELAY
@@ -3675,7 +3084,7 @@ xpt_schedule(struct cam_periph *perph, u_int32_t new_priority)
* started the queue, return 0 so the caller doesn't attempt
* to run the queue.
*/
-static int
+int
xpt_schedule_dev(struct camq *queue, cam_pinfo *pinfo,
u_int32_t new_priority)
{
@@ -3784,7 +3193,7 @@ xpt_run_dev_allocq(struct cam_eb *bus)
devq->alloc_queue.qfrozen_cnt--;
}
-static void
+void
xpt_run_dev_sendq(struct cam_eb *bus)
{
struct cam_devq *devq;
@@ -3993,7 +3402,7 @@ xpt_create_path_unlocked(struct cam_path **new_path_ptr,
return (status);
}
-static cam_status
+cam_status
xpt_compile_path(struct cam_path *new_path, struct cam_periph *perph,
path_id_t path_id, target_id_t target_id, lun_id_t lun_id)
{
@@ -4032,9 +3441,10 @@ xpt_compile_path(struct cam_path *new_path, struct cam_periph *perph,
/* Create one */
struct cam_ed *new_device;
- new_device = xpt_alloc_device(bus,
- target,
- lun_id);
+ new_device =
+ (*(bus->xport->alloc_device))(bus,
+ target,
+ lun_id);
if (new_device == NULL) {
status = CAM_RESRC_UNAVAIL;
} else {
@@ -4064,7 +3474,7 @@ xpt_compile_path(struct cam_path *new_path, struct cam_periph *perph,
return (status);
}
-static void
+void
xpt_release_path(struct cam_path *path)
{
CAM_DEBUG(path, CAM_DEBUG_TRACE, ("xpt_release_path\n"));
@@ -4306,6 +3716,12 @@ xpt_release_ccb(union ccb *free_ccb)
/* Functions accessed by SIM drivers */
+static struct xpt_xport xport_default = {
+ .alloc_device = xpt_alloc_device_default,
+ .action = xpt_action_default,
+ .async = xpt_dev_async_default,
+};
+
/*
* A sim structure, listing the SIM entry points and instance
* identification info is passed to xpt_bus_register to hook the SIM
@@ -4321,6 +3737,8 @@ xpt_bus_register(struct cam_sim *sim, device_t parent, u_int32_t bus)
struct cam_eb *new_bus;
struct cam_eb *old_bus;
struct ccb_pathinq cpi;
+ struct cam_path path;
+ cam_status status;
mtx_assert(sim->mtx, MA_OWNED);
@@ -4333,7 +3751,6 @@ xpt_bus_register(struct cam_sim *sim, device_t parent, u_int32_t bus)
}
if (strcmp(sim->sim_name, "xpt") != 0) {
-
sim->path_id =
xptpathid(sim->sim_name, sim->unit_number, sim->bus_id);
}
@@ -4346,6 +3763,7 @@ xpt_bus_register(struct cam_sim *sim, device_t parent, u_int32_t bus)
new_bus->flags = 0;
new_bus->refcount = 1; /* Held until a bus_deregister event */
new_bus->generation = 0;
+
mtx_lock(&xsoftc.xpt_topo_lock);
old_bus = TAILQ_FIRST(&xsoftc.xpt_busses);
while (old_bus != NULL
@@ -4358,18 +3776,46 @@ xpt_bus_register(struct cam_sim *sim, device_t parent, u_int32_t bus)
xsoftc.bus_generation++;
mtx_unlock(&xsoftc.xpt_topo_lock);
+ /*
+ * Set a default transport so that a PATH_INQ can be issued to
+ * the SIM. This will then allow for probing and attaching of
+ * a more appropriate transport.
+ */
+ new_bus->xport = &xport_default;
+
+ bzero(&path, sizeof(path));
+ status = xpt_compile_path(&path, /*periph*/NULL, sim->path_id,
+ CAM_TARGET_WILDCARD, CAM_LUN_WILDCARD);
+ if (status != CAM_REQ_CMP)
+ printf("xpt_compile_path returned %d\n", status);
+
+ xpt_setup_ccb(&cpi.ccb_h, &path, /*priority*/1);
+ cpi.ccb_h.func_code = XPT_PATH_INQ;
+ xpt_action((union ccb *)&cpi);
+
+ if (cpi.ccb_h.status == CAM_REQ_CMP) {
+ switch (cpi.transport) {
+ case XPORT_SPI:
+ case XPORT_SAS:
+ case XPORT_FC:
+ case XPORT_USB:
+ new_bus->xport = scsi_get_xport();
+ break;
+ case XPORT_ATA:
+ case XPORT_SATA:
+ new_bus->xport = ata_get_xport();
+ break;
+ default:
+ new_bus->xport = &xport_default;
+ break;
+ }
+ }
+
/* Notify interested parties */
if (sim->path_id != CAM_XPT_PATH_ID) {
- struct cam_path path;
-
- xpt_compile_path(&path, /*periph*/NULL, sim->path_id,
- CAM_TARGET_WILDCARD, CAM_LUN_WILDCARD);
- xpt_setup_ccb(&cpi.ccb_h, &path, /*priority*/1);
- cpi.ccb_h.func_code = XPT_PATH_INQ;
- xpt_action((union ccb *)&cpi);
xpt_async(AC_PATH_REGISTERED, &path, &cpi);
- xpt_release_path(&path);
}
+ xpt_release_path(&path);
return (CAM_SUCCESS);
}
@@ -4521,8 +3967,9 @@ xpt_async(u_int32_t async_code, struct cam_path *path, void *async_arg)
&& device->lun_id != CAM_LUN_WILDCARD)
continue;
- xpt_dev_async(async_code, bus, target,
- device, async_arg);
+ (*(bus->xport->async))(async_code, bus,
+ target, device,
+ async_arg);
xpt_async_bcast(&device->asyncs, async_code,
path, async_arg);
@@ -4562,68 +4009,12 @@ xpt_async_bcast(struct async_list *async_head,
}
}
-/*
- * Handle any per-device event notifications that require action by the XPT.
- */
static void
-xpt_dev_async(u_int32_t async_code, struct cam_eb *bus, struct cam_et *target,
- struct cam_ed *device, void *async_arg)
+xpt_dev_async_default(u_int32_t async_code, struct cam_eb *bus,
+ struct cam_et *target, struct cam_ed *device,
+ void *async_arg)
{
- cam_status status;
- struct cam_path newpath;
-
- /*
- * We only need to handle events for real devices.
- */
- if (target->target_id == CAM_TARGET_WILDCARD
- || device->lun_id == CAM_LUN_WILDCARD)
- return;
-
- /*
- * We need our own path with wildcards expanded to
- * handle certain types of events.
- */
- if ((async_code == AC_SENT_BDR)
- || (async_code == AC_BUS_RESET)
- || (async_code == AC_INQ_CHANGED))
- status = xpt_compile_path(&newpath, NULL,
- bus->path_id,
- target->target_id,
- device->lun_id);
- else
- status = CAM_REQ_CMP_ERR;
-
- if (status == CAM_REQ_CMP) {
-
- /*
- * Allow transfer negotiation to occur in a
- * tag free environment.
- */
- if (async_code == AC_SENT_BDR
- || async_code == AC_BUS_RESET)
- xpt_toggle_tags(&newpath);
-
- if (async_code == AC_INQ_CHANGED) {
- /*
- * We've sent a start unit command, or
- * something similar to a device that
- * may have caused its inquiry data to
- * change. So we re-scan the device to
- * refresh the inquiry data for it.
- */
- xpt_scan_lun(newpath.periph, &newpath,
- CAM_EXPECT_INQ_CHANGE, NULL);
- }
- xpt_release_path(&newpath);
- } else if (async_code == AC_LOST_DEVICE) {
- device->flags |= CAM_DEV_UNCONFIGURED;
- } else if (async_code == AC_TRANSFER_NEG) {
- struct ccb_trans_settings *settings;
-
- settings = (struct ccb_trans_settings *)async_arg;
- xpt_set_transfer_settings(settings, device,
- /*async_update*/TRUE);
- }
+	printf("xpt_dev_async_default called\n");
}
u_int32_t
@@ -4938,9 +4329,34 @@ xpt_release_target(struct cam_eb *bus, struct cam_et *target)
}
static struct cam_ed *
+xpt_alloc_device_default(struct cam_eb *bus, struct cam_et *target,
+ lun_id_t lun_id)
+{
+ struct cam_ed *device, *cur_device;
+
+ device = xpt_alloc_device(bus, target, lun_id);
+ if (device == NULL)
+ return (NULL);
+
+ device->mintags = 1;
+ device->maxtags = 1;
+ bus->sim->max_ccbs = device->ccbq.devq_openings;
+ cur_device = TAILQ_FIRST(&target->ed_entries);
+ while (cur_device != NULL && cur_device->lun_id < lun_id)
+ cur_device = TAILQ_NEXT(cur_device, links);
+ if (cur_device != NULL) {
+ TAILQ_INSERT_BEFORE(cur_device, device, links);
+ } else {
+ TAILQ_INSERT_TAIL(&target->ed_entries, device, links);
+ }
+ target->generation++;
+
+ return (device);
+}
+
+struct cam_ed *
xpt_alloc_device(struct cam_eb *bus, struct cam_et *target, lun_id_t lun_id)
{
- struct cam_path path;
struct cam_ed *device;
struct cam_devq *devq;
cam_status status;
@@ -4957,8 +4373,6 @@ xpt_alloc_device(struct cam_eb *bus, struct cam_et *target, lun_id_t lun_id)
}
if (device != NULL) {
- struct cam_ed *cur_device;
-
cam_init_pinfo(&device->alloc_ccb_entry.pinfo);
device->alloc_ccb_entry.device = device;
cam_init_pinfo(&device->send_ccb_entry.pinfo);
@@ -4981,16 +4395,6 @@ xpt_alloc_device(struct cam_eb *bus, struct cam_et *target, lun_id_t lun_id)
SLIST_INIT(&device->periphs);
device->generation = 0;
device->owner = NULL;
- /*
- * Take the default quirk entry until we have inquiry
- * data and can determine a better quirk to use.
- */
- device->quirk = &xpt_quirk_table[xpt_quirk_table_size - 1];
- bzero(&device->inq_data, sizeof(device->inq_data));
- device->inq_flags = 0;
- device->queue_flags = 0;
- device->serial_num = NULL;
- device->serial_num_len = 0;
device->qfrozen_cnt = 0;
device->flags = CAM_DEV_UNCONFIGURED;
device->tag_delay_count = 0;
@@ -5007,30 +4411,6 @@ xpt_alloc_device(struct cam_eb *bus, struct cam_et *target, lun_id_t lun_id)
*/
target->refcount++;
- /*
- * XXX should be limited by number of CCBs this bus can
- * do.
- */
- bus->sim->max_ccbs += device->ccbq.devq_openings;
- /* Insertion sort into our target's device list */
- cur_device = TAILQ_FIRST(&target->ed_entries);
- while (cur_device != NULL && cur_device->lun_id < lun_id)
- cur_device = TAILQ_NEXT(cur_device, links);
- if (cur_device != NULL) {
- TAILQ_INSERT_BEFORE(cur_device, device, links);
- } else {
- TAILQ_INSERT_TAIL(&target->ed_entries, device, links);
- }
- target->generation++;
- if (lun_id != CAM_LUN_WILDCARD) {
- xpt_compile_path(&path,
- NULL,
- bus->path_id,
- target->target_id,
- lun_id);
- xpt_devise_transport(&path);
- xpt_release_path(&path);
- }
}
return (device);
}
@@ -5064,7 +4444,7 @@ xpt_release_device(struct cam_eb *bus, struct cam_et *target,
}
}
-static u_int32_t
+u_int32_t
xpt_dev_ccbq_resize(struct cam_path *path, int newopenings)
{
int diff;
@@ -5136,1715 +4516,6 @@ xpt_find_device(struct cam_et *target, lun_id_t lun_id)
return (device);
}
-typedef struct {
- union ccb *request_ccb;
- struct ccb_pathinq *cpi;
- int counter;
-} xpt_scan_bus_info;
-
-/*
- * To start a scan, request_ccb is an XPT_SCAN_BUS ccb.
- * As the scan progresses, xpt_scan_bus is used as the
- * callback on completion function.
- */
-static void
-xpt_scan_bus(struct cam_periph *periph, union ccb *request_ccb)
-{
- CAM_DEBUG(request_ccb->ccb_h.path, CAM_DEBUG_TRACE,
- ("xpt_scan_bus\n"));
- switch (request_ccb->ccb_h.func_code) {
- case XPT_SCAN_BUS:
- {
- xpt_scan_bus_info *scan_info;
- union ccb *work_ccb;
- struct cam_path *path;
- u_int i;
- u_int max_target;
- u_int initiator_id;
-
- /* Find out the characteristics of the bus */
- work_ccb = xpt_alloc_ccb_nowait();
- if (work_ccb == NULL) {
- request_ccb->ccb_h.status = CAM_RESRC_UNAVAIL;
- xpt_done(request_ccb);
- return;
- }
- xpt_setup_ccb(&work_ccb->ccb_h, request_ccb->ccb_h.path,
- request_ccb->ccb_h.pinfo.priority);
- work_ccb->ccb_h.func_code = XPT_PATH_INQ;
- xpt_action(work_ccb);
- if (work_ccb->ccb_h.status != CAM_REQ_CMP) {
- request_ccb->ccb_h.status = work_ccb->ccb_h.status;
- xpt_free_ccb(work_ccb);
- xpt_done(request_ccb);
- return;
- }
-
- if ((work_ccb->cpi.hba_misc & PIM_NOINITIATOR) != 0) {
- /*
- * Can't scan the bus on an adapter that
- * cannot perform the initiator role.
- */
- request_ccb->ccb_h.status = CAM_REQ_CMP;
- xpt_free_ccb(work_ccb);
- xpt_done(request_ccb);
- return;
- }
-
- /* Save some state for use while we probe for devices */
- scan_info = (xpt_scan_bus_info *)
- malloc(sizeof(xpt_scan_bus_info), M_CAMXPT, M_NOWAIT);
- if (scan_info == NULL) {
- request_ccb->ccb_h.status = CAM_RESRC_UNAVAIL;
- xpt_done(request_ccb);
- return;
- }
- scan_info->request_ccb = request_ccb;
- scan_info->cpi = &work_ccb->cpi;
-
- /* Cache on our stack so we can work asynchronously */
- max_target = scan_info->cpi->max_target;
- initiator_id = scan_info->cpi->initiator_id;
-
-
- /*
- * We can scan all targets in parallel, or do it sequentially.
- */
- if (scan_info->cpi->hba_misc & PIM_SEQSCAN) {
- max_target = 0;
- scan_info->counter = 0;
- } else {
- scan_info->counter = scan_info->cpi->max_target + 1;
- if (scan_info->cpi->initiator_id < scan_info->counter) {
- scan_info->counter--;
- }
- }
-
- for (i = 0; i <= max_target; i++) {
- cam_status status;
- if (i == initiator_id)
- continue;
-
- status = xpt_create_path(&path, xpt_periph,
- request_ccb->ccb_h.path_id,
- i, 0);
- if (status != CAM_REQ_CMP) {
- printf("xpt_scan_bus: xpt_create_path failed"
- " with status %#x, bus scan halted\n",
- status);
- free(scan_info, M_CAMXPT);
- request_ccb->ccb_h.status = status;
- xpt_free_ccb(work_ccb);
- xpt_done(request_ccb);
- break;
- }
- work_ccb = xpt_alloc_ccb_nowait();
- if (work_ccb == NULL) {
- free(scan_info, M_CAMXPT);
- xpt_free_path(path);
- request_ccb->ccb_h.status = CAM_RESRC_UNAVAIL;
- xpt_done(request_ccb);
- break;
- }
- xpt_setup_ccb(&work_ccb->ccb_h, path,
- request_ccb->ccb_h.pinfo.priority);
- work_ccb->ccb_h.func_code = XPT_SCAN_LUN;
- work_ccb->ccb_h.cbfcnp = xpt_scan_bus;
- work_ccb->ccb_h.ppriv_ptr0 = scan_info;
- work_ccb->crcn.flags = request_ccb->crcn.flags;
- xpt_action(work_ccb);
- }
- break;
- }
- case XPT_SCAN_LUN:
- {
- cam_status status;
- struct cam_path *path;
- xpt_scan_bus_info *scan_info;
- path_id_t path_id;
- target_id_t target_id;
- lun_id_t lun_id;
-
- /* Reuse the same CCB to query if a device was really found */
- scan_info = (xpt_scan_bus_info *)request_ccb->ccb_h.ppriv_ptr0;
- xpt_setup_ccb(&request_ccb->ccb_h, request_ccb->ccb_h.path,
- request_ccb->ccb_h.pinfo.priority);
- request_ccb->ccb_h.func_code = XPT_GDEV_TYPE;
-
- path_id = request_ccb->ccb_h.path_id;
- target_id = request_ccb->ccb_h.target_id;
- lun_id = request_ccb->ccb_h.target_lun;
- xpt_action(request_ccb);
-
- if (request_ccb->ccb_h.status != CAM_REQ_CMP) {
- struct cam_ed *device;
- struct cam_et *target;
- int phl;
-
- /*
- * If we already probed lun 0 successfully, or
- * we have additional configured luns on this
- * target that might have "gone away", go onto
- * the next lun.
- */
- target = request_ccb->ccb_h.path->target;
- /*
- * We may touch devices that we don't
-			 * hold references to, so ensure they
- * don't disappear out from under us.
- * The target above is referenced by the
- * path in the request ccb.
- */
- phl = 0;
- device = TAILQ_FIRST(&target->ed_entries);
- if (device != NULL) {
- phl = CAN_SRCH_HI_SPARSE(device);
- if (device->lun_id == 0)
- device = TAILQ_NEXT(device, links);
- }
- if ((lun_id != 0) || (device != NULL)) {
- if (lun_id < (CAM_SCSI2_MAXLUN-1) || phl)
- lun_id++;
- }
- } else {
- struct cam_ed *device;
-
- device = request_ccb->ccb_h.path->device;
-
- if ((device->quirk->quirks & CAM_QUIRK_NOLUNS) == 0) {
- /* Try the next lun */
- if (lun_id < (CAM_SCSI2_MAXLUN-1)
- || CAN_SRCH_HI_DENSE(device))
- lun_id++;
- }
- }
-
- /*
- * Free the current request path- we're done with it.
- */
- xpt_free_path(request_ccb->ccb_h.path);
-
- /*
- * Check to see if we scan any further luns.
- */
- if (lun_id == request_ccb->ccb_h.target_lun
- || lun_id > scan_info->cpi->max_lun) {
- int done;
-
- hop_again:
- done = 0;
- if (scan_info->cpi->hba_misc & PIM_SEQSCAN) {
- scan_info->counter++;
- if (scan_info->counter ==
- scan_info->cpi->initiator_id) {
- scan_info->counter++;
- }
- if (scan_info->counter >=
- scan_info->cpi->max_target+1) {
- done = 1;
- }
- } else {
- scan_info->counter--;
- if (scan_info->counter == 0) {
- done = 1;
- }
- }
- if (done) {
- xpt_free_ccb(request_ccb);
- xpt_free_ccb((union ccb *)scan_info->cpi);
- request_ccb = scan_info->request_ccb;
- free(scan_info, M_CAMXPT);
- request_ccb->ccb_h.status = CAM_REQ_CMP;
- xpt_done(request_ccb);
- break;
- }
-
- if ((scan_info->cpi->hba_misc & PIM_SEQSCAN) == 0) {
- break;
- }
- status = xpt_create_path(&path, xpt_periph,
- scan_info->request_ccb->ccb_h.path_id,
- scan_info->counter, 0);
- if (status != CAM_REQ_CMP) {
- printf("xpt_scan_bus: xpt_create_path failed"
- " with status %#x, bus scan halted\n",
- status);
- xpt_free_ccb(request_ccb);
- xpt_free_ccb((union ccb *)scan_info->cpi);
- request_ccb = scan_info->request_ccb;
- free(scan_info, M_CAMXPT);
- request_ccb->ccb_h.status = status;
- xpt_done(request_ccb);
- break;
- }
- xpt_setup_ccb(&request_ccb->ccb_h, path,
- request_ccb->ccb_h.pinfo.priority);
- request_ccb->ccb_h.func_code = XPT_SCAN_LUN;
- request_ccb->ccb_h.cbfcnp = xpt_scan_bus;
- request_ccb->ccb_h.ppriv_ptr0 = scan_info;
- request_ccb->crcn.flags =
- scan_info->request_ccb->crcn.flags;
- } else {
- status = xpt_create_path(&path, xpt_periph,
- path_id, target_id, lun_id);
- if (status != CAM_REQ_CMP) {
- printf("xpt_scan_bus: xpt_create_path failed "
- "with status %#x, halting LUN scan\n",
- status);
- goto hop_again;
- }
- xpt_setup_ccb(&request_ccb->ccb_h, path,
- request_ccb->ccb_h.pinfo.priority);
- request_ccb->ccb_h.func_code = XPT_SCAN_LUN;
- request_ccb->ccb_h.cbfcnp = xpt_scan_bus;
- request_ccb->ccb_h.ppriv_ptr0 = scan_info;
- request_ccb->crcn.flags =
- scan_info->request_ccb->crcn.flags;
- }
- xpt_action(request_ccb);
- break;
- }
- default:
- break;
- }
-}
-
-typedef enum {
- PROBE_TUR,
- PROBE_INQUIRY, /* this counts as DV0 for Basic Domain Validation */
- PROBE_FULL_INQUIRY,
- PROBE_MODE_SENSE,
- PROBE_SERIAL_NUM_0,
- PROBE_SERIAL_NUM_1,
- PROBE_TUR_FOR_NEGOTIATION,
- PROBE_INQUIRY_BASIC_DV1,
- PROBE_INQUIRY_BASIC_DV2,
- PROBE_DV_EXIT,
- PROBE_INVALID
-} probe_action;
-
-static char *probe_action_text[] = {
- "PROBE_TUR",
- "PROBE_INQUIRY",
- "PROBE_FULL_INQUIRY",
- "PROBE_MODE_SENSE",
- "PROBE_SERIAL_NUM_0",
- "PROBE_SERIAL_NUM_1",
- "PROBE_TUR_FOR_NEGOTIATION",
- "PROBE_INQUIRY_BASIC_DV1",
- "PROBE_INQUIRY_BASIC_DV2",
- "PROBE_DV_EXIT",
- "PROBE_INVALID"
-};
-
-#define PROBE_SET_ACTION(softc, newaction) \
-do { \
- char **text; \
- text = probe_action_text; \
- CAM_DEBUG((softc)->periph->path, CAM_DEBUG_INFO, \
- ("Probe %s to %s\n", text[(softc)->action], \
- text[(newaction)])); \
- (softc)->action = (newaction); \
-} while(0)
-
-typedef enum {
- PROBE_INQUIRY_CKSUM = 0x01,
- PROBE_SERIAL_CKSUM = 0x02,
- PROBE_NO_ANNOUNCE = 0x04
-} probe_flags;
-
-typedef struct {
- TAILQ_HEAD(, ccb_hdr) request_ccbs;
- probe_action action;
- union ccb saved_ccb;
- probe_flags flags;
- MD5_CTX context;
- u_int8_t digest[16];
- struct cam_periph *periph;
-} probe_softc;
-
-static void
-xpt_scan_lun(struct cam_periph *periph, struct cam_path *path,
- cam_flags flags, union ccb *request_ccb)
-{
- struct ccb_pathinq cpi;
- cam_status status;
- struct cam_path *new_path;
- struct cam_periph *old_periph;
-
- CAM_DEBUG(request_ccb->ccb_h.path, CAM_DEBUG_TRACE,
- ("xpt_scan_lun\n"));
-
- xpt_setup_ccb(&cpi.ccb_h, path, /*priority*/1);
- cpi.ccb_h.func_code = XPT_PATH_INQ;
- xpt_action((union ccb *)&cpi);
-
- if (cpi.ccb_h.status != CAM_REQ_CMP) {
- if (request_ccb != NULL) {
- request_ccb->ccb_h.status = cpi.ccb_h.status;
- xpt_done(request_ccb);
- }
- return;
- }
-
- if ((cpi.hba_misc & PIM_NOINITIATOR) != 0) {
- /*
- * Can't scan the bus on an adapter that
- * cannot perform the initiator role.
- */
- if (request_ccb != NULL) {
- request_ccb->ccb_h.status = CAM_REQ_CMP;
- xpt_done(request_ccb);
- }
- return;
- }
-
- if (request_ccb == NULL) {
- request_ccb = malloc(sizeof(union ccb), M_CAMXPT, M_NOWAIT);
- if (request_ccb == NULL) {
- xpt_print(path, "xpt_scan_lun: can't allocate CCB, "
- "can't continue\n");
- return;
- }
- new_path = malloc(sizeof(*new_path), M_CAMXPT, M_NOWAIT);
- if (new_path == NULL) {
- xpt_print(path, "xpt_scan_lun: can't allocate path, "
- "can't continue\n");
- free(request_ccb, M_CAMXPT);
- return;
- }
- status = xpt_compile_path(new_path, xpt_periph,
- path->bus->path_id,
- path->target->target_id,
- path->device->lun_id);
-
- if (status != CAM_REQ_CMP) {
- xpt_print(path, "xpt_scan_lun: can't compile path, "
- "can't continue\n");
- free(request_ccb, M_CAMXPT);
- free(new_path, M_CAMXPT);
- return;
- }
- xpt_setup_ccb(&request_ccb->ccb_h, new_path, /*priority*/ 1);
- request_ccb->ccb_h.cbfcnp = xptscandone;
- request_ccb->ccb_h.func_code = XPT_SCAN_LUN;
- request_ccb->crcn.flags = flags;
- }
-
- if ((old_periph = cam_periph_find(path, "probe")) != NULL) {
- probe_softc *softc;
-
- softc = (probe_softc *)old_periph->softc;
- TAILQ_INSERT_TAIL(&softc->request_ccbs, &request_ccb->ccb_h,
- periph_links.tqe);
- } else {
- status = cam_periph_alloc(proberegister, NULL, probecleanup,
- probestart, "probe",
- CAM_PERIPH_BIO,
- request_ccb->ccb_h.path, NULL, 0,
- request_ccb);
-
- if (status != CAM_REQ_CMP) {
-			xpt_print(path, "xpt_scan_lun: cam_periph_alloc "
- "returned an error, can't continue probe\n");
- request_ccb->ccb_h.status = status;
- xpt_done(request_ccb);
- }
- }
-}
-
-static void
-xptscandone(struct cam_periph *periph, union ccb *done_ccb)
-{
- xpt_release_path(done_ccb->ccb_h.path);
- free(done_ccb->ccb_h.path, M_CAMXPT);
- free(done_ccb, M_CAMXPT);
-}
-
-static cam_status
-proberegister(struct cam_periph *periph, void *arg)
-{
- union ccb *request_ccb; /* CCB representing the probe request */
- cam_status status;
- probe_softc *softc;
-
- request_ccb = (union ccb *)arg;
- if (periph == NULL) {
- printf("proberegister: periph was NULL!!\n");
- return(CAM_REQ_CMP_ERR);
- }
-
- if (request_ccb == NULL) {
- printf("proberegister: no probe CCB, "
- "can't register device\n");
- return(CAM_REQ_CMP_ERR);
- }
-
- softc = (probe_softc *)malloc(sizeof(*softc), M_CAMXPT, M_NOWAIT);
-
- if (softc == NULL) {
- printf("proberegister: Unable to probe new device. "
- "Unable to allocate softc\n");
- return(CAM_REQ_CMP_ERR);
- }
- TAILQ_INIT(&softc->request_ccbs);
- TAILQ_INSERT_TAIL(&softc->request_ccbs, &request_ccb->ccb_h,
- periph_links.tqe);
- softc->flags = 0;
- periph->softc = softc;
- softc->periph = periph;
- softc->action = PROBE_INVALID;
- status = cam_periph_acquire(periph);
- if (status != CAM_REQ_CMP) {
- return (status);
- }
-
-
- /*
- * Ensure we've waited at least a bus settle
- * delay before attempting to probe the device.
- * For HBAs that don't do bus resets, this won't make a difference.
- */
- cam_periph_freeze_after_event(periph, &periph->path->bus->last_reset,
- scsi_delay);
- probeschedule(periph);
- return(CAM_REQ_CMP);
-}
-
-static void
-probeschedule(struct cam_periph *periph)
-{
- struct ccb_pathinq cpi;
- union ccb *ccb;
- probe_softc *softc;
-
- softc = (probe_softc *)periph->softc;
- ccb = (union ccb *)TAILQ_FIRST(&softc->request_ccbs);
-
- xpt_setup_ccb(&cpi.ccb_h, periph->path, /*priority*/1);
- cpi.ccb_h.func_code = XPT_PATH_INQ;
- xpt_action((union ccb *)&cpi);
-
- /*
- * If a device has gone away and another device, or the same one,
- * is back in the same place, it should have a unit attention
- * condition pending. It will not report the unit attention in
- * response to an inquiry, which may leave invalid transfer
- * negotiations in effect. The TUR will reveal the unit attention
- * condition. Only send the TUR for lun 0, since some devices
- * will get confused by commands other than inquiry to non-existent
- * luns. If you think a device has gone away, start your scan from
- * lun 0. This will ensure that any bogus transfer settings are
- * invalidated.
- *
- * If we haven't seen the device before and the controller supports
- * some kind of transfer negotiation, negotiate with the first
- * sent command if no bus reset was performed at startup. This
- * ensures that the device is not confused by transfer negotiation
- * settings left over by loader or BIOS action.
- */
- if (((ccb->ccb_h.path->device->flags & CAM_DEV_UNCONFIGURED) == 0)
- && (ccb->ccb_h.target_lun == 0)) {
- PROBE_SET_ACTION(softc, PROBE_TUR);
- } else if ((cpi.hba_inquiry & (PI_WIDE_32|PI_WIDE_16|PI_SDTR_ABLE)) != 0
- && (cpi.hba_misc & PIM_NOBUSRESET) != 0) {
- proberequestdefaultnegotiation(periph);
- PROBE_SET_ACTION(softc, PROBE_INQUIRY);
- } else {
- PROBE_SET_ACTION(softc, PROBE_INQUIRY);
- }
-
- if (ccb->crcn.flags & CAM_EXPECT_INQ_CHANGE)
- softc->flags |= PROBE_NO_ANNOUNCE;
- else
- softc->flags &= ~PROBE_NO_ANNOUNCE;
-
- xpt_schedule(periph, ccb->ccb_h.pinfo.priority);
-}
-
-static void
-probestart(struct cam_periph *periph, union ccb *start_ccb)
-{
- /* Probe the device that our peripheral driver points to */
- struct ccb_scsiio *csio;
- probe_softc *softc;
-
- CAM_DEBUG(start_ccb->ccb_h.path, CAM_DEBUG_TRACE, ("probestart\n"));
-
- softc = (probe_softc *)periph->softc;
- csio = &start_ccb->csio;
-
- switch (softc->action) {
- case PROBE_TUR:
- case PROBE_TUR_FOR_NEGOTIATION:
- case PROBE_DV_EXIT:
- {
- scsi_test_unit_ready(csio,
- /*retries*/10,
- probedone,
- MSG_SIMPLE_Q_TAG,
- SSD_FULL_SIZE,
- /*timeout*/60000);
- break;
- }
- case PROBE_INQUIRY:
- case PROBE_FULL_INQUIRY:
- case PROBE_INQUIRY_BASIC_DV1:
- case PROBE_INQUIRY_BASIC_DV2:
- {
- u_int inquiry_len;
- struct scsi_inquiry_data *inq_buf;
-
- inq_buf = &periph->path->device->inq_data;
-
- /*
- * If the device is currently configured, we calculate an
- * MD5 checksum of the inquiry data, and if the serial number
- * length is greater than 0, add the serial number data
- * into the checksum as well. Once the inquiry and the
- * serial number check finish, we attempt to figure out
- * whether we still have the same device.
- */
- if ((periph->path->device->flags & CAM_DEV_UNCONFIGURED) == 0) {
- MD5Init(&softc->context);
- MD5Update(&softc->context, (unsigned char *)inq_buf,
- sizeof(struct scsi_inquiry_data));
- softc->flags |= PROBE_INQUIRY_CKSUM;
- if (periph->path->device->serial_num_len > 0) {
- MD5Update(&softc->context,
- periph->path->device->serial_num,
- periph->path->device->serial_num_len);
- softc->flags |= PROBE_SERIAL_CKSUM;
- }
- MD5Final(softc->digest, &softc->context);
- }
-
- if (softc->action == PROBE_INQUIRY)
- inquiry_len = SHORT_INQUIRY_LENGTH;
- else
- inquiry_len = SID_ADDITIONAL_LENGTH(inq_buf);
-
- /*
- * Some parallel SCSI devices fail to send an
- * ignore wide residue message when dealing with
- * odd length inquiry requests. Round up to be
- * safe.
- */
- inquiry_len = roundup2(inquiry_len, 2);
-
- if (softc->action == PROBE_INQUIRY_BASIC_DV1
- || softc->action == PROBE_INQUIRY_BASIC_DV2) {
- inq_buf = malloc(inquiry_len, M_CAMXPT, M_NOWAIT);
- }
- if (inq_buf == NULL) {
- xpt_print(periph->path, "malloc failure - skipping Basic "
- "Domain Validation\n");
- PROBE_SET_ACTION(softc, PROBE_DV_EXIT);
- scsi_test_unit_ready(csio,
- /*retries*/4,
- probedone,
- MSG_SIMPLE_Q_TAG,
- SSD_FULL_SIZE,
- /*timeout*/60000);
- break;
- }
- scsi_inquiry(csio,
- /*retries*/4,
- probedone,
- MSG_SIMPLE_Q_TAG,
- (u_int8_t *)inq_buf,
- inquiry_len,
- /*evpd*/FALSE,
- /*page_code*/0,
- SSD_MIN_SIZE,
- /*timeout*/60 * 1000);
- break;
- }
- case PROBE_MODE_SENSE:
- {
- void *mode_buf;
- int mode_buf_len;
-
- mode_buf_len = sizeof(struct scsi_mode_header_6)
- + sizeof(struct scsi_mode_blk_desc)
- + sizeof(struct scsi_control_page);
- mode_buf = malloc(mode_buf_len, M_CAMXPT, M_NOWAIT);
- if (mode_buf != NULL) {
- scsi_mode_sense(csio,
- /*retries*/4,
- probedone,
- MSG_SIMPLE_Q_TAG,
- /*dbd*/FALSE,
- SMS_PAGE_CTRL_CURRENT,
- SMS_CONTROL_MODE_PAGE,
- mode_buf,
- mode_buf_len,
- SSD_FULL_SIZE,
- /*timeout*/60000);
- break;
- }
- xpt_print(periph->path, "Unable to mode sense control page - "
- "malloc failure\n");
- PROBE_SET_ACTION(softc, PROBE_SERIAL_NUM_0);
- }
- /* FALLTHROUGH */
- case PROBE_SERIAL_NUM_0:
- {
- struct scsi_vpd_supported_page_list *vpd_list = NULL;
- struct cam_ed *device;
-
- device = periph->path->device;
- if ((device->quirk->quirks & CAM_QUIRK_NOSERIAL) == 0) {
- vpd_list = malloc(sizeof(*vpd_list), M_CAMXPT,
- M_NOWAIT | M_ZERO);
- }
-
- if (vpd_list != NULL) {
- scsi_inquiry(csio,
- /*retries*/4,
- probedone,
- MSG_SIMPLE_Q_TAG,
- (u_int8_t *)vpd_list,
- sizeof(*vpd_list),
- /*evpd*/TRUE,
- SVPD_SUPPORTED_PAGE_LIST,
- SSD_MIN_SIZE,
- /*timeout*/60 * 1000);
- break;
- }
- /*
- * We'll have to do without, let our probedone
- * routine finish up for us.
- */
- start_ccb->csio.data_ptr = NULL;
- probedone(periph, start_ccb);
- return;
- }
- case PROBE_SERIAL_NUM_1:
- {
- struct scsi_vpd_unit_serial_number *serial_buf;
- struct cam_ed* device;
-
- serial_buf = NULL;
- device = periph->path->device;
- device->serial_num = NULL;
- device->serial_num_len = 0;
-
- serial_buf = (struct scsi_vpd_unit_serial_number *)
- malloc(sizeof(*serial_buf), M_CAMXPT, M_NOWAIT|M_ZERO);
-
- if (serial_buf != NULL) {
- scsi_inquiry(csio,
- /*retries*/4,
- probedone,
- MSG_SIMPLE_Q_TAG,
- (u_int8_t *)serial_buf,
- sizeof(*serial_buf),
- /*evpd*/TRUE,
- SVPD_UNIT_SERIAL_NUMBER,
- SSD_MIN_SIZE,
- /*timeout*/60 * 1000);
- break;
- }
- /*
- * We'll have to do without, let our probedone
- * routine finish up for us.
- */
- start_ccb->csio.data_ptr = NULL;
- probedone(periph, start_ccb);
- return;
- }
- case PROBE_INVALID:
- CAM_DEBUG(start_ccb->ccb_h.path, CAM_DEBUG_INFO,
- ("probestart: invalid action state\n"));
- default:
- break;
- }
- xpt_action(start_ccb);
-}
-
-static void
-proberequestdefaultnegotiation(struct cam_periph *periph)
-{
- struct ccb_trans_settings cts;
-
- xpt_setup_ccb(&cts.ccb_h, periph->path, /*priority*/1);
- cts.ccb_h.func_code = XPT_GET_TRAN_SETTINGS;
- cts.type = CTS_TYPE_USER_SETTINGS;
- xpt_action((union ccb *)&cts);
- if ((cts.ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
- return;
- }
- cts.ccb_h.func_code = XPT_SET_TRAN_SETTINGS;
- cts.type = CTS_TYPE_CURRENT_SETTINGS;
- xpt_action((union ccb *)&cts);
-}
-
-/*
- * Backoff Negotiation Code - only pertinent for SPI devices.
- */
-static int
-proberequestbackoff(struct cam_periph *periph, struct cam_ed *device)
-{
- struct ccb_trans_settings cts;
- struct ccb_trans_settings_spi *spi;
-
- memset(&cts, 0, sizeof (cts));
- xpt_setup_ccb(&cts.ccb_h, periph->path, /*priority*/1);
- cts.ccb_h.func_code = XPT_GET_TRAN_SETTINGS;
- cts.type = CTS_TYPE_CURRENT_SETTINGS;
- xpt_action((union ccb *)&cts);
- if ((cts.ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
- if (bootverbose) {
- xpt_print(periph->path,
- "failed to get current device settings\n");
- }
- return (0);
- }
- if (cts.transport != XPORT_SPI) {
- if (bootverbose) {
- xpt_print(periph->path, "not SPI transport\n");
- }
- return (0);
- }
- spi = &cts.xport_specific.spi;
-
- /*
- * We cannot renegotiate sync rate if we don't have one.
- */
- if ((spi->valid & CTS_SPI_VALID_SYNC_RATE) == 0) {
- if (bootverbose) {
- xpt_print(periph->path, "no sync rate known\n");
- }
- return (0);
- }
-
- /*
- * We'll assert that we don't have to touch PPR options- the
- * SIM will see what we do with period and offset and adjust
- * the PPR options as appropriate.
- */
-
- /*
- * A sync rate with unknown or zero offset is nonsensical.
- * A sync period of zero means Async.
- */
- if ((spi->valid & CTS_SPI_VALID_SYNC_OFFSET) == 0
- || spi->sync_offset == 0 || spi->sync_period == 0) {
- if (bootverbose) {
- xpt_print(periph->path, "no sync rate available\n");
- }
- return (0);
- }
-
- if (device->flags & CAM_DEV_DV_HIT_BOTTOM) {
- CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
- ("hit async: giving up on DV\n"));
- return (0);
- }
-
- /*
- * Jump sync_period up by one, but stop at 5MHz and fall back to Async.
- * We don't try to remember 'last' settings to see if the SIM actually
- * gets into the speed we want to set. We check on the SIM telling
- * us that a requested speed is bad, but otherwise don't try and
- * check the speed due to the asynchronous and handshake nature
- * of speed setting.
- */
- spi->valid = CTS_SPI_VALID_SYNC_RATE | CTS_SPI_VALID_SYNC_OFFSET;
- for (;;) {
- spi->sync_period++;
- if (spi->sync_period >= 0xf) {
- spi->sync_period = 0;
- spi->sync_offset = 0;
- CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
- ("setting to async for DV\n"));
- /*
- * Once we hit async, we don't want to try
- * any more settings.
- */
- device->flags |= CAM_DEV_DV_HIT_BOTTOM;
- } else if (bootverbose) {
- CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
- ("DV: period 0x%x\n", spi->sync_period));
- printf("setting period to 0x%x\n", spi->sync_period);
- }
- cts.ccb_h.func_code = XPT_SET_TRAN_SETTINGS;
- cts.type = CTS_TYPE_CURRENT_SETTINGS;
- xpt_action((union ccb *)&cts);
- if ((cts.ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
- break;
- }
- CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
- ("DV: failed to set period 0x%x\n", spi->sync_period));
- if (spi->sync_period == 0) {
- return (0);
- }
- }
- return (1);
-}
-
-static void
-probedone(struct cam_periph *periph, union ccb *done_ccb)
-{
- probe_softc *softc;
- struct cam_path *path;
- u_int32_t priority;
-
- CAM_DEBUG(done_ccb->ccb_h.path, CAM_DEBUG_TRACE, ("probedone\n"));
-
- softc = (probe_softc *)periph->softc;
- path = done_ccb->ccb_h.path;
- priority = done_ccb->ccb_h.pinfo.priority;
-
- switch (softc->action) {
- case PROBE_TUR:
- {
- if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
- if (cam_periph_error(done_ccb, 0,
- SF_NO_PRINT, NULL) == ERESTART)
- return;
- else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0)
- /* Don't wedge the queue */
- xpt_release_devq(done_ccb->ccb_h.path,
- /*count*/1,
- /*run_queue*/TRUE);
- }
- PROBE_SET_ACTION(softc, PROBE_INQUIRY);
- xpt_release_ccb(done_ccb);
- xpt_schedule(periph, priority);
- return;
- }
- case PROBE_INQUIRY:
- case PROBE_FULL_INQUIRY:
- {
- if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
- struct scsi_inquiry_data *inq_buf;
- u_int8_t periph_qual;
-
- path->device->flags |= CAM_DEV_INQUIRY_DATA_VALID;
- inq_buf = &path->device->inq_data;
-
- periph_qual = SID_QUAL(inq_buf);
-
- switch(periph_qual) {
- case SID_QUAL_LU_CONNECTED:
- {
- u_int8_t len;
-
- /*
- * We conservatively request only
- * SHORT_INQUIRY_LEN bytes of inquiry
- * information during our first try
- * at sending an INQUIRY. If the device
- * has more information to give,
- * perform a second request specifying
- * the amount of information the device
- * is willing to give.
- */
- len = inq_buf->additional_length
- + offsetof(struct scsi_inquiry_data,
- additional_length) + 1;
- if (softc->action == PROBE_INQUIRY
- && len > SHORT_INQUIRY_LENGTH) {
- PROBE_SET_ACTION(softc, PROBE_FULL_INQUIRY);
- xpt_release_ccb(done_ccb);
- xpt_schedule(periph, priority);
- return;
- }
-
- xpt_find_quirk(path->device);
-
- xpt_devise_transport(path);
- if (INQ_DATA_TQ_ENABLED(inq_buf))
- PROBE_SET_ACTION(softc, PROBE_MODE_SENSE);
- else
- PROBE_SET_ACTION(softc, PROBE_SERIAL_NUM_0);
-
- path->device->flags &= ~CAM_DEV_UNCONFIGURED;
-
- xpt_release_ccb(done_ccb);
- xpt_schedule(periph, priority);
- return;
- }
- default:
- break;
- }
- } else if (cam_periph_error(done_ccb, 0,
- done_ccb->ccb_h.target_lun > 0
- ? SF_RETRY_UA|SF_QUIET_IR
- : SF_RETRY_UA,
- &softc->saved_ccb) == ERESTART) {
- return;
- } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
- /* Don't wedge the queue */
- xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
- /*run_queue*/TRUE);
- }
- /*
- * If we get to this point, we got an error status back
- * from the inquiry and the error status doesn't require
- * automatically retrying the command. Therefore, the
- * inquiry failed. If we had inquiry information before
- * for this device, but this latest inquiry command failed,
- * the device has probably gone away. If this device isn't
- * already marked unconfigured, notify the peripheral
- * drivers that this device is no more.
- */
- if ((path->device->flags & CAM_DEV_UNCONFIGURED) == 0)
- /* Send the async notification. */
- xpt_async(AC_LOST_DEVICE, path, NULL);
-
- xpt_release_ccb(done_ccb);
- break;
- }
- case PROBE_MODE_SENSE:
- {
- struct ccb_scsiio *csio;
- struct scsi_mode_header_6 *mode_hdr;
-
- csio = &done_ccb->csio;
- mode_hdr = (struct scsi_mode_header_6 *)csio->data_ptr;
- if ((csio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
- struct scsi_control_page *page;
- u_int8_t *offset;
-
- offset = ((u_int8_t *)&mode_hdr[1])
- + mode_hdr->blk_desc_len;
- page = (struct scsi_control_page *)offset;
- path->device->queue_flags = page->queue_flags;
- } else if (cam_periph_error(done_ccb, 0,
- SF_RETRY_UA|SF_NO_PRINT,
- &softc->saved_ccb) == ERESTART) {
- return;
- } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
- /* Don't wedge the queue */
- xpt_release_devq(done_ccb->ccb_h.path,
- /*count*/1, /*run_queue*/TRUE);
- }
- xpt_release_ccb(done_ccb);
- free(mode_hdr, M_CAMXPT);
- PROBE_SET_ACTION(softc, PROBE_SERIAL_NUM_0);
- xpt_schedule(periph, priority);
- return;
- }
- case PROBE_SERIAL_NUM_0:
- {
- struct ccb_scsiio *csio;
- struct scsi_vpd_supported_page_list *page_list;
- int length, serialnum_supported, i;
-
- serialnum_supported = 0;
- csio = &done_ccb->csio;
- page_list =
- (struct scsi_vpd_supported_page_list *)csio->data_ptr;
-
- if (page_list == NULL) {
- /*
- * Don't process the command as it was never sent
- */
- } else if ((csio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP
- && (page_list->length > 0)) {
- length = min(page_list->length,
- SVPD_SUPPORTED_PAGES_SIZE);
- for (i = 0; i < length; i++) {
- if (page_list->list[i] ==
- SVPD_UNIT_SERIAL_NUMBER) {
- serialnum_supported = 1;
- break;
- }
- }
- } else if (cam_periph_error(done_ccb, 0,
- SF_RETRY_UA|SF_NO_PRINT,
- &softc->saved_ccb) == ERESTART) {
- return;
- } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
- /* Don't wedge the queue */
- xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
- /*run_queue*/TRUE);
- }
-
- if (page_list != NULL)
- free(page_list, M_CAMXPT);
-
- if (serialnum_supported) {
- xpt_release_ccb(done_ccb);
- PROBE_SET_ACTION(softc, PROBE_SERIAL_NUM_1);
- xpt_schedule(periph, priority);
- return;
- }
-
- csio->data_ptr = NULL;
- /* FALLTHROUGH */
- }
-
- case PROBE_SERIAL_NUM_1:
- {
- struct ccb_scsiio *csio;
- struct scsi_vpd_unit_serial_number *serial_buf;
- u_int32_t priority;
- int changed;
- int have_serialnum;
-
- changed = 1;
- have_serialnum = 0;
- csio = &done_ccb->csio;
- priority = done_ccb->ccb_h.pinfo.priority;
- serial_buf =
- (struct scsi_vpd_unit_serial_number *)csio->data_ptr;
-
- /* Clean up from previous instance of this device */
- if (path->device->serial_num != NULL) {
- free(path->device->serial_num, M_CAMXPT);
- path->device->serial_num = NULL;
- path->device->serial_num_len = 0;
- }
-
- if (serial_buf == NULL) {
- /*
- * Don't process the command as it was never sent
- */
- } else if ((csio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP
- && (serial_buf->length > 0)) {
- have_serialnum = 1;
- path->device->serial_num =
- (u_int8_t *)malloc((serial_buf->length + 1),
- M_CAMXPT, M_NOWAIT);
- if (path->device->serial_num != NULL) {
- bcopy(serial_buf->serial_num,
- path->device->serial_num,
- serial_buf->length);
- path->device->serial_num_len =
- serial_buf->length;
- path->device->serial_num[serial_buf->length]
- = '\0';
- }
- } else if (cam_periph_error(done_ccb, 0,
- SF_RETRY_UA|SF_NO_PRINT,
- &softc->saved_ccb) == ERESTART) {
- return;
- } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
- /* Don't wedge the queue */
- xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
- /*run_queue*/TRUE);
- }
-
- /*
- * Let's see if we have seen this device before.
- */
- if ((softc->flags & PROBE_INQUIRY_CKSUM) != 0) {
- MD5_CTX context;
- u_int8_t digest[16];
-
- MD5Init(&context);
-
- MD5Update(&context,
- (unsigned char *)&path->device->inq_data,
- sizeof(struct scsi_inquiry_data));
-
- if (have_serialnum)
- MD5Update(&context, serial_buf->serial_num,
- serial_buf->length);
-
- MD5Final(digest, &context);
- if (bcmp(softc->digest, digest, 16) == 0)
- changed = 0;
-
- /*
- * XXX Do we need to do a TUR in order to ensure
- * that the device really hasn't changed???
- */
- if ((changed != 0)
- && ((softc->flags & PROBE_NO_ANNOUNCE) == 0))
- xpt_async(AC_LOST_DEVICE, path, NULL);
- }
- if (serial_buf != NULL)
- free(serial_buf, M_CAMXPT);
-
- if (changed != 0) {
- /*
- * Now that we have all the necessary
- * information to safely perform transfer
- * negotiations... Controllers don't perform
- * any negotiation or tagged queuing until
- * after the first XPT_SET_TRAN_SETTINGS ccb is
- * received. So, on a new device, just retrieve
- * the user settings, and set them as the current
- * settings to set the device up.
- */
- proberequestdefaultnegotiation(periph);
- xpt_release_ccb(done_ccb);
-
- /*
- * Perform a TUR to allow the controller to
- * perform any necessary transfer negotiation.
- */
- PROBE_SET_ACTION(softc, PROBE_TUR_FOR_NEGOTIATION);
- xpt_schedule(periph, priority);
- return;
- }
- xpt_release_ccb(done_ccb);
- break;
- }
- case PROBE_TUR_FOR_NEGOTIATION:
- if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
- DELAY(500000);
- if (cam_periph_error(done_ccb, 0, SF_RETRY_UA,
- NULL) == ERESTART)
- return;
- }
- /* FALLTHROUGH */
- case PROBE_DV_EXIT:
- if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
- /* Don't wedge the queue */
- xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
- /*run_queue*/TRUE);
- }
- /*
- * Do Domain Validation for lun 0 on devices that claim
- * to support Synchronous Transfer modes.
- */
- if (softc->action == PROBE_TUR_FOR_NEGOTIATION
- && done_ccb->ccb_h.target_lun == 0
- && (path->device->inq_data.flags & SID_Sync) != 0
- && (path->device->flags & CAM_DEV_IN_DV) == 0) {
- CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
- ("Begin Domain Validation\n"));
- path->device->flags |= CAM_DEV_IN_DV;
- xpt_release_ccb(done_ccb);
- PROBE_SET_ACTION(softc, PROBE_INQUIRY_BASIC_DV1);
- xpt_schedule(periph, priority);
- return;
- }
- if (softc->action == PROBE_DV_EXIT) {
- CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
- ("Leave Domain Validation\n"));
- }
- path->device->flags &=
- ~(CAM_DEV_UNCONFIGURED|CAM_DEV_IN_DV|CAM_DEV_DV_HIT_BOTTOM);
- if ((softc->flags & PROBE_NO_ANNOUNCE) == 0) {
- /* Inform the XPT that a new device has been found */
- done_ccb->ccb_h.func_code = XPT_GDEV_TYPE;
- xpt_action(done_ccb);
- xpt_async(AC_FOUND_DEVICE, done_ccb->ccb_h.path,
- done_ccb);
- }
- xpt_release_ccb(done_ccb);
- break;
- case PROBE_INQUIRY_BASIC_DV1:
- case PROBE_INQUIRY_BASIC_DV2:
- {
- struct scsi_inquiry_data *nbuf;
- struct ccb_scsiio *csio;
-
- if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
- /* Don't wedge the queue */
- xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
- /*run_queue*/TRUE);
- }
- csio = &done_ccb->csio;
- nbuf = (struct scsi_inquiry_data *)csio->data_ptr;
- if (bcmp(nbuf, &path->device->inq_data, SHORT_INQUIRY_LENGTH)) {
- xpt_print(path,
- "inquiry data fails comparison at DV%d step\n",
- softc->action == PROBE_INQUIRY_BASIC_DV1 ? 1 : 2);
- if (proberequestbackoff(periph, path->device)) {
- path->device->flags &= ~CAM_DEV_IN_DV;
- PROBE_SET_ACTION(softc, PROBE_TUR_FOR_NEGOTIATION);
- } else {
- /* give up */
- PROBE_SET_ACTION(softc, PROBE_DV_EXIT);
- }
- free(nbuf, M_CAMXPT);
- xpt_release_ccb(done_ccb);
- xpt_schedule(periph, priority);
- return;
- }
- free(nbuf, M_CAMXPT);
- if (softc->action == PROBE_INQUIRY_BASIC_DV1) {
- PROBE_SET_ACTION(softc, PROBE_INQUIRY_BASIC_DV2);
- xpt_release_ccb(done_ccb);
- xpt_schedule(periph, priority);
- return;
- }
- if (softc->action == PROBE_INQUIRY_BASIC_DV2) {
- CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
- ("Leave Domain Validation Successfully\n"));
- }
- path->device->flags &=
- ~(CAM_DEV_UNCONFIGURED|CAM_DEV_IN_DV|CAM_DEV_DV_HIT_BOTTOM);
- if ((softc->flags & PROBE_NO_ANNOUNCE) == 0) {
- /* Inform the XPT that a new device has been found */
- done_ccb->ccb_h.func_code = XPT_GDEV_TYPE;
- xpt_action(done_ccb);
- xpt_async(AC_FOUND_DEVICE, done_ccb->ccb_h.path,
- done_ccb);
- }
- xpt_release_ccb(done_ccb);
- break;
- }
- case PROBE_INVALID:
- CAM_DEBUG(done_ccb->ccb_h.path, CAM_DEBUG_INFO,
- ("probedone: invalid action state\n"));
- default:
- break;
- }
- done_ccb = (union ccb *)TAILQ_FIRST(&softc->request_ccbs);
- TAILQ_REMOVE(&softc->request_ccbs, &done_ccb->ccb_h, periph_links.tqe);
- done_ccb->ccb_h.status = CAM_REQ_CMP;
- xpt_done(done_ccb);
- if (TAILQ_FIRST(&softc->request_ccbs) == NULL) {
- cam_periph_invalidate(periph);
- cam_periph_release_locked(periph);
- } else {
- probeschedule(periph);
- }
-}
-
-static void
-probecleanup(struct cam_periph *periph)
-{
- free(periph->softc, M_CAMXPT);
-}
-
-static void
-xpt_find_quirk(struct cam_ed *device)
-{
- caddr_t match;
-
- match = cam_quirkmatch((caddr_t)&device->inq_data,
- (caddr_t)xpt_quirk_table,
- sizeof(xpt_quirk_table)/sizeof(*xpt_quirk_table),
- sizeof(*xpt_quirk_table), scsi_inquiry_match);
-
- if (match == NULL)
- panic("xpt_find_quirk: device didn't match wildcard entry!!");
-
- device->quirk = (struct xpt_quirk_entry *)match;
-}
-
-static int
-sysctl_cam_search_luns(SYSCTL_HANDLER_ARGS)
-{
- int error, bool;
-
- bool = cam_srch_hi;
- error = sysctl_handle_int(oidp, &bool, 0, req);
- if (error != 0 || req->newptr == NULL)
- return (error);
- if (bool == 0 || bool == 1) {
- cam_srch_hi = bool;
- return (0);
- } else {
- return (EINVAL);
- }
-}
-
-static void
-xpt_devise_transport(struct cam_path *path)
-{
- struct ccb_pathinq cpi;
- struct ccb_trans_settings cts;
- struct scsi_inquiry_data *inq_buf;
-
- /* Get transport information from the SIM */
- xpt_setup_ccb(&cpi.ccb_h, path, /*priority*/1);
- cpi.ccb_h.func_code = XPT_PATH_INQ;
- xpt_action((union ccb *)&cpi);
-
- inq_buf = NULL;
- if ((path->device->flags & CAM_DEV_INQUIRY_DATA_VALID) != 0)
- inq_buf = &path->device->inq_data;
- path->device->protocol = PROTO_SCSI;
- path->device->protocol_version =
- inq_buf != NULL ? SID_ANSI_REV(inq_buf) : cpi.protocol_version;
- path->device->transport = cpi.transport;
- path->device->transport_version = cpi.transport_version;
-
- /*
- * Any device not using SPI3 features should
- * be considered SPI2 or lower.
- */
- if (inq_buf != NULL) {
- if (path->device->transport == XPORT_SPI
- && (inq_buf->spi3data & SID_SPI_MASK) == 0
- && path->device->transport_version > 2)
- path->device->transport_version = 2;
- } else {
- struct cam_ed* otherdev;
-
- for (otherdev = TAILQ_FIRST(&path->target->ed_entries);
- otherdev != NULL;
- otherdev = TAILQ_NEXT(otherdev, links)) {
- if (otherdev != path->device)
- break;
- }
-
- if (otherdev != NULL) {
- /*
- * Initially assume the same versioning as
- * prior luns for this target.
- */
- path->device->protocol_version =
- otherdev->protocol_version;
- path->device->transport_version =
- otherdev->transport_version;
- } else {
- /* Until we know better, opt for safety */
- path->device->protocol_version = 2;
- if (path->device->transport == XPORT_SPI)
- path->device->transport_version = 2;
- else
- path->device->transport_version = 0;
- }
- }
-
- /*
- * XXX
- * For a device compliant with SPC-2 we should be able
- * to determine the transport version supported by
- * scrutinizing the version descriptors in the
- * inquiry buffer.
- */
-
- /* Tell the controller what we think */
- xpt_setup_ccb(&cts.ccb_h, path, /*priority*/1);
- cts.ccb_h.func_code = XPT_SET_TRAN_SETTINGS;
- cts.type = CTS_TYPE_CURRENT_SETTINGS;
- cts.transport = path->device->transport;
- cts.transport_version = path->device->transport_version;
- cts.protocol = path->device->protocol;
- cts.protocol_version = path->device->protocol_version;
- cts.proto_specific.valid = 0;
- cts.xport_specific.valid = 0;
- xpt_action((union ccb *)&cts);
-}
-
-static void
-xpt_set_transfer_settings(struct ccb_trans_settings *cts, struct cam_ed *device,
- int async_update)
-{
- struct ccb_pathinq cpi;
- struct ccb_trans_settings cur_cts;
- struct ccb_trans_settings_scsi *scsi;
- struct ccb_trans_settings_scsi *cur_scsi;
- struct cam_sim *sim;
- struct scsi_inquiry_data *inq_data;
-
- if (device == NULL) {
- cts->ccb_h.status = CAM_PATH_INVALID;
- xpt_done((union ccb *)cts);
- return;
- }
-
- if (cts->protocol == PROTO_UNKNOWN
- || cts->protocol == PROTO_UNSPECIFIED) {
- cts->protocol = device->protocol;
- cts->protocol_version = device->protocol_version;
- }
-
- if (cts->protocol_version == PROTO_VERSION_UNKNOWN
- || cts->protocol_version == PROTO_VERSION_UNSPECIFIED)
- cts->protocol_version = device->protocol_version;
-
- if (cts->protocol != device->protocol) {
- xpt_print(cts->ccb_h.path, "Uninitialized Protocol %x:%x?\n",
- cts->protocol, device->protocol);
- cts->protocol = device->protocol;
- }
-
- if (cts->protocol_version > device->protocol_version) {
- if (bootverbose) {
- xpt_print(cts->ccb_h.path, "Down revving Protocol "
- "Version from %d to %d?\n", cts->protocol_version,
- device->protocol_version);
- }
- cts->protocol_version = device->protocol_version;
- }
-
- if (cts->transport == XPORT_UNKNOWN
- || cts->transport == XPORT_UNSPECIFIED) {
- cts->transport = device->transport;
- cts->transport_version = device->transport_version;
- }
-
- if (cts->transport_version == XPORT_VERSION_UNKNOWN
- || cts->transport_version == XPORT_VERSION_UNSPECIFIED)
- cts->transport_version = device->transport_version;
-
- if (cts->transport != device->transport) {
- xpt_print(cts->ccb_h.path, "Uninitialized Transport %x:%x?\n",
- cts->transport, device->transport);
- cts->transport = device->transport;
- }
-
- if (cts->transport_version > device->transport_version) {
- if (bootverbose) {
- xpt_print(cts->ccb_h.path, "Down revving Transport "
- "Version from %d to %d?\n", cts->transport_version,
- device->transport_version);
- }
- cts->transport_version = device->transport_version;
- }
-
- sim = cts->ccb_h.path->bus->sim;
-
- /*
- * Nothing more of interest to do unless
- * this is a device connected via the
- * SCSI protocol.
- */
- if (cts->protocol != PROTO_SCSI) {
- if (async_update == FALSE)
- (*(sim->sim_action))(sim, (union ccb *)cts);
- return;
- }
-
- inq_data = &device->inq_data;
- scsi = &cts->proto_specific.scsi;
- xpt_setup_ccb(&cpi.ccb_h, cts->ccb_h.path, /*priority*/1);
- cpi.ccb_h.func_code = XPT_PATH_INQ;
- xpt_action((union ccb *)&cpi);
-
- /* SCSI specific sanity checking */
- if ((cpi.hba_inquiry & PI_TAG_ABLE) == 0
- || (INQ_DATA_TQ_ENABLED(inq_data)) == 0
- || (device->queue_flags & SCP_QUEUE_DQUE) != 0
- || (device->quirk->mintags == 0)) {
- /*
- * Can't tag on hardware that doesn't support tags,
- * doesn't have it enabled, or has broken tag support.
- */
- scsi->flags &= ~CTS_SCSI_FLAGS_TAG_ENB;
- }
-
- if (async_update == FALSE) {
- /*
- * Perform sanity checking against what the
- * controller and device can do.
- */
- xpt_setup_ccb(&cur_cts.ccb_h, cts->ccb_h.path, /*priority*/1);
- cur_cts.ccb_h.func_code = XPT_GET_TRAN_SETTINGS;
- cur_cts.type = cts->type;
- xpt_action((union ccb *)&cur_cts);
- if ((cur_cts.ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
- return;
- }
- cur_scsi = &cur_cts.proto_specific.scsi;
- if ((scsi->valid & CTS_SCSI_VALID_TQ) == 0) {
- scsi->flags &= ~CTS_SCSI_FLAGS_TAG_ENB;
- scsi->flags |= cur_scsi->flags & CTS_SCSI_FLAGS_TAG_ENB;
- }
- if ((cur_scsi->valid & CTS_SCSI_VALID_TQ) == 0)
- scsi->flags &= ~CTS_SCSI_FLAGS_TAG_ENB;
- }
-
- /* SPI specific sanity checking */
- if (cts->transport == XPORT_SPI && async_update == FALSE) {
- u_int spi3caps;
- struct ccb_trans_settings_spi *spi;
- struct ccb_trans_settings_spi *cur_spi;
-
- spi = &cts->xport_specific.spi;
-
- cur_spi = &cur_cts.xport_specific.spi;
-
- /* Fill in any gaps in what the user gave us */
- if ((spi->valid & CTS_SPI_VALID_SYNC_RATE) == 0)
- spi->sync_period = cur_spi->sync_period;
- if ((cur_spi->valid & CTS_SPI_VALID_SYNC_RATE) == 0)
- spi->sync_period = 0;
- if ((spi->valid & CTS_SPI_VALID_SYNC_OFFSET) == 0)
- spi->sync_offset = cur_spi->sync_offset;
- if ((cur_spi->valid & CTS_SPI_VALID_SYNC_OFFSET) == 0)
- spi->sync_offset = 0;
- if ((spi->valid & CTS_SPI_VALID_PPR_OPTIONS) == 0)
- spi->ppr_options = cur_spi->ppr_options;
- if ((cur_spi->valid & CTS_SPI_VALID_PPR_OPTIONS) == 0)
- spi->ppr_options = 0;
- if ((spi->valid & CTS_SPI_VALID_BUS_WIDTH) == 0)
- spi->bus_width = cur_spi->bus_width;
- if ((cur_spi->valid & CTS_SPI_VALID_BUS_WIDTH) == 0)
- spi->bus_width = 0;
- if ((spi->valid & CTS_SPI_VALID_DISC) == 0) {
- spi->flags &= ~CTS_SPI_FLAGS_DISC_ENB;
- spi->flags |= cur_spi->flags & CTS_SPI_FLAGS_DISC_ENB;
- }
- if ((cur_spi->valid & CTS_SPI_VALID_DISC) == 0)
- spi->flags &= ~CTS_SPI_FLAGS_DISC_ENB;
- if (((device->flags & CAM_DEV_INQUIRY_DATA_VALID) != 0
- && (inq_data->flags & SID_Sync) == 0
- && cts->type == CTS_TYPE_CURRENT_SETTINGS)
- || ((cpi.hba_inquiry & PI_SDTR_ABLE) == 0)) {
- /* Force async */
- spi->sync_period = 0;
- spi->sync_offset = 0;
- }
-
- switch (spi->bus_width) {
- case MSG_EXT_WDTR_BUS_32_BIT:
- if (((device->flags & CAM_DEV_INQUIRY_DATA_VALID) == 0
- || (inq_data->flags & SID_WBus32) != 0
- || cts->type == CTS_TYPE_USER_SETTINGS)
- && (cpi.hba_inquiry & PI_WIDE_32) != 0)
- break;
- /* Fall Through to 16-bit */
- case MSG_EXT_WDTR_BUS_16_BIT:
- if (((device->flags & CAM_DEV_INQUIRY_DATA_VALID) == 0
- || (inq_data->flags & SID_WBus16) != 0
- || cts->type == CTS_TYPE_USER_SETTINGS)
- && (cpi.hba_inquiry & PI_WIDE_16) != 0) {
- spi->bus_width = MSG_EXT_WDTR_BUS_16_BIT;
- break;
- }
- /* Fall Through to 8-bit */
- default: /* New bus width?? */
- case MSG_EXT_WDTR_BUS_8_BIT:
- /* All targets can do this */
- spi->bus_width = MSG_EXT_WDTR_BUS_8_BIT;
- break;
- }
-
- spi3caps = cpi.xport_specific.spi.ppr_options;
- if ((device->flags & CAM_DEV_INQUIRY_DATA_VALID) != 0
- && cts->type == CTS_TYPE_CURRENT_SETTINGS)
- spi3caps &= inq_data->spi3data;
-
- if ((spi3caps & SID_SPI_CLOCK_DT) == 0)
- spi->ppr_options &= ~MSG_EXT_PPR_DT_REQ;
-
- if ((spi3caps & SID_SPI_IUS) == 0)
- spi->ppr_options &= ~MSG_EXT_PPR_IU_REQ;
-
- if ((spi3caps & SID_SPI_QAS) == 0)
- spi->ppr_options &= ~MSG_EXT_PPR_QAS_REQ;
-
- /* No SPI Transfer settings are allowed unless we are wide */
- if (spi->bus_width == 0)
- spi->ppr_options = 0;
-
- if ((spi->valid & CTS_SPI_VALID_DISC)
- && ((spi->flags & CTS_SPI_FLAGS_DISC_ENB) == 0)) {
- /*
- * Can't tag queue without disconnection.
- */
- scsi->flags &= ~CTS_SCSI_FLAGS_TAG_ENB;
- scsi->valid |= CTS_SCSI_VALID_TQ;
- }
-
- /*
- * If we are currently performing tagged transactions to
- * this device and want to change its negotiation parameters,
- * go non-tagged for a bit to give the controller a chance to
- * negotiate unhampered by tag messages.
- */
- if (cts->type == CTS_TYPE_CURRENT_SETTINGS
- && (device->inq_flags & SID_CmdQue) != 0
- && (scsi->flags & CTS_SCSI_FLAGS_TAG_ENB) != 0
- && (spi->flags & (CTS_SPI_VALID_SYNC_RATE|
- CTS_SPI_VALID_SYNC_OFFSET|
- CTS_SPI_VALID_BUS_WIDTH)) != 0)
- xpt_toggle_tags(cts->ccb_h.path);
- }
-
- if (cts->type == CTS_TYPE_CURRENT_SETTINGS
- && (scsi->valid & CTS_SCSI_VALID_TQ) != 0) {
- int device_tagenb;
-
- /*
- * If we are transitioning from tags to no-tags or
- * vice-versa, we need to carefully freeze and restart
- * the queue so that we don't overlap tagged and non-tagged
- * commands. We also temporarily stop tags if there is
- * a change in transfer negotiation settings to allow
- * "tag-less" negotiation.
- */
- if ((device->flags & CAM_DEV_TAG_AFTER_COUNT) != 0
- || (device->inq_flags & SID_CmdQue) != 0)
- device_tagenb = TRUE;
- else
- device_tagenb = FALSE;
-
- if (((scsi->flags & CTS_SCSI_FLAGS_TAG_ENB) != 0
- && device_tagenb == FALSE)
- || ((scsi->flags & CTS_SCSI_FLAGS_TAG_ENB) == 0
- && device_tagenb == TRUE)) {
-
- if ((scsi->flags & CTS_SCSI_FLAGS_TAG_ENB) != 0) {
- /*
- * Delay change to use tags until after a
- * few commands have gone to this device so
- * the controller has time to perform transfer
- * negotiations without tagged messages getting
- * in the way.
- */
- device->tag_delay_count = CAM_TAG_DELAY_COUNT;
- device->flags |= CAM_DEV_TAG_AFTER_COUNT;
- } else {
- struct ccb_relsim crs;
-
- xpt_freeze_devq(cts->ccb_h.path, /*count*/1);
- device->inq_flags &= ~SID_CmdQue;
- xpt_dev_ccbq_resize(cts->ccb_h.path,
- sim->max_dev_openings);
- device->flags &= ~CAM_DEV_TAG_AFTER_COUNT;
- device->tag_delay_count = 0;
-
- xpt_setup_ccb(&crs.ccb_h, cts->ccb_h.path,
- /*priority*/1);
- crs.ccb_h.func_code = XPT_REL_SIMQ;
- crs.release_flags = RELSIM_RELEASE_AFTER_QEMPTY;
- crs.openings
- = crs.release_timeout
- = crs.qfrozen_cnt
- = 0;
- xpt_action((union ccb *)&crs);
- }
- }
- }
- if (async_update == FALSE)
- (*(sim->sim_action))(sim, (union ccb *)cts);
-}
-
-
-static void
-xpt_toggle_tags(struct cam_path *path)
-{
- struct cam_ed *dev;
-
- /*
- * Give controllers a chance to renegotiate
- * before starting tag operations. We
- * "toggle" tagged queuing off then on
- * which causes the tag enable command delay
- * counter to come into effect.
- */
- dev = path->device;
- if ((dev->flags & CAM_DEV_TAG_AFTER_COUNT) != 0
- || ((dev->inq_flags & SID_CmdQue) != 0
- && (dev->inq_flags & (SID_Sync|SID_WBus16|SID_WBus32)) != 0)) {
- struct ccb_trans_settings cts;
-
- xpt_setup_ccb(&cts.ccb_h, path, 1);
- cts.protocol = PROTO_SCSI;
- cts.protocol_version = PROTO_VERSION_UNSPECIFIED;
- cts.transport = XPORT_UNSPECIFIED;
- cts.transport_version = XPORT_VERSION_UNSPECIFIED;
- cts.proto_specific.scsi.flags = 0;
- cts.proto_specific.scsi.valid = CTS_SCSI_VALID_TQ;
- xpt_set_transfer_settings(&cts, path->device,
- /*async_update*/TRUE);
- cts.proto_specific.scsi.flags = CTS_SCSI_FLAGS_TAG_ENB;
- xpt_set_transfer_settings(&cts, path->device,
- /*async_update*/TRUE);
- }
-}
-
static void
xpt_start_tags(struct cam_path *path)
{
@@ -6861,7 +4532,7 @@ xpt_start_tags(struct cam_path *path)
if (device->tag_saved_openings != 0)
newopenings = device->tag_saved_openings;
else
- newopenings = min(device->quirk->maxtags,
+ newopenings = min(device->maxtags,
sim->max_tagged_dev_openings);
xpt_dev_ccbq_resize(path, newopenings);
xpt_setup_ccb(&crs.ccb_h, path, /*priority*/1);
diff --git a/sys/cam/cam_xpt.h b/sys/cam/cam_xpt.h
index 26ca65769d12..283cad1c34e6 100644
--- a/sys/cam/cam_xpt.h
+++ b/sys/cam/cam_xpt.h
@@ -48,7 +48,45 @@ struct cam_path;
#ifdef _KERNEL
+/*
+ * Definition of an async handler callback block. These are used to add
+ * SIMs and peripherals to the async callback lists.
+ */
+struct async_node {
+ SLIST_ENTRY(async_node) links;
+ u_int32_t event_enable; /* Async Event enables */
+ void (*callback)(void *arg, u_int32_t code,
+ struct cam_path *path, void *args);
+ void *callback_arg;
+};
+
+SLIST_HEAD(async_list, async_node);
+SLIST_HEAD(periph_list, cam_periph);
+
+#if defined(CAM_DEBUG_FLAGS) && !defined(CAMDEBUG)
+#error "You must have options CAMDEBUG to use options CAM_DEBUG_FLAGS"
+#endif
+
+/*
+ * In order to enable the CAM_DEBUG_* options, the user must have CAMDEBUG
+ * enabled. Also, the user must have either none, or all of CAM_DEBUG_BUS,
+ * CAM_DEBUG_TARGET, and CAM_DEBUG_LUN specified.
+ */
+#if defined(CAM_DEBUG_BUS) || defined(CAM_DEBUG_TARGET) \
+ || defined(CAM_DEBUG_LUN)
+#ifdef CAMDEBUG
+#if !defined(CAM_DEBUG_BUS) || !defined(CAM_DEBUG_TARGET) \
+ || !defined(CAM_DEBUG_LUN)
+#error "You must define all or none of CAM_DEBUG_BUS, CAM_DEBUG_TARGET \
+ and CAM_DEBUG_LUN"
+#endif /* !CAM_DEBUG_BUS || !CAM_DEBUG_TARGET || !CAM_DEBUG_LUN */
+#else /* !CAMDEBUG */
+#error "You must use options CAMDEBUG if you use the CAM_DEBUG_* options"
+#endif /* CAMDEBUG */
+#endif /* CAM_DEBUG_BUS || CAM_DEBUG_TARGET || CAM_DEBUG_LUN */
+
void xpt_action(union ccb *new_ccb);
+void xpt_action_default(union ccb *new_ccb);
void xpt_setup_ccb(struct ccb_hdr *ccb_h,
struct cam_path *path,
u_int32_t priority);
@@ -81,6 +119,14 @@ void xpt_lock_buses(void);
void xpt_unlock_buses(void);
cam_status xpt_register_async(int event, ac_callback_t *cbfunc,
void *cbarg, struct cam_path *path);
+cam_status xpt_compile_path(struct cam_path *new_path,
+ struct cam_periph *perph,
+ path_id_t path_id,
+ target_id_t target_id,
+ lun_id_t lun_id);
+
+void xpt_release_path(struct cam_path *path);
+
#endif /* _KERNEL */
#endif /* _CAM_CAM_XPT_H */
diff --git a/sys/cam/cam_xpt_internal.h b/sys/cam/cam_xpt_internal.h
new file mode 100644
index 000000000000..12c5c2f556f6
--- /dev/null
+++ b/sys/cam/cam_xpt_internal.h
@@ -0,0 +1,205 @@
+/*-
+ * Copyright 2009 Scott Long
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions, and the following disclaimer,
+ * without modification, immediately at the beginning of the file.
+ * 2. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
+ * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ * $FreeBSD$
+ */
+
+#ifndef _CAM_CAM_XPT_INTERNAL_H
+#define _CAM_CAM_XPT_INTERNAL_H 1
+
+/* Forward Declarations */
+struct cam_eb;
+struct cam_et;
+struct cam_ed;
+
+typedef struct cam_ed * (*xpt_alloc_device_func)(struct cam_eb *bus,
+ struct cam_et *target,
+ lun_id_t lun_id);
+typedef void (*xpt_release_device_func)(struct cam_eb *bus,
+ struct cam_et *target,
+ struct cam_ed *device);
+typedef void (*xpt_action_func)(union ccb *start_ccb);
+typedef void (*xpt_dev_async_func)(u_int32_t async_code,
+ struct cam_eb *bus,
+ struct cam_et *target,
+ struct cam_ed *device,
+ void *async_arg);
+typedef void (*xpt_announce_periph_func)(struct cam_periph *periph,
+ char *announce_string);
+
+struct xpt_xport {
+ xpt_alloc_device_func alloc_device;
+ xpt_release_device_func reldev;
+ xpt_action_func action;
+ xpt_dev_async_func async;
+ xpt_announce_periph_func announce;
+};
+
+/*
+ * Structure for queueing a device in a run queue.
+ * There is one run queue for allocating new ccbs,
+ * and another for sending ccbs to the controller.
+ */
+struct cam_ed_qinfo {
+ cam_pinfo pinfo;
+ struct cam_ed *device;
+};
+
+/*
+ * The CAM EDT (Existing Device Table) contains the device information for
+ * all devices for all busses in the system. The table contains a
+ * cam_ed structure for each device on the bus.
+ */
+struct cam_ed {
+ TAILQ_ENTRY(cam_ed) links;
+ struct cam_ed_qinfo alloc_ccb_entry;
+ struct cam_ed_qinfo send_ccb_entry;
+ struct cam_et *target;
+ struct cam_sim *sim;
+ lun_id_t lun_id;
+ struct camq drvq; /*
+ * Queue of type drivers wanting to do
+ * work on this device.
+ */
+ struct cam_ccbq ccbq; /* Queue of pending ccbs */
+ struct async_list asyncs; /* Async callback info for this B/T/L */
+ struct periph_list periphs; /* All attached devices */
+ u_int generation; /* Generation number */
+ struct cam_periph *owner; /* Peripheral driver's ownership tag */
+ void *quirk; /* Oddities about this device */
+ u_int maxtags;
+ u_int mintags;
+ cam_proto protocol;
+ u_int protocol_version;
+ cam_xport transport;
+ u_int transport_version;
+ struct scsi_inquiry_data inq_data;
+ struct ata_params ident_data;
+ u_int8_t inq_flags; /*
+ * Current settings for inquiry flags.
+ * This allows us to override settings
+ * like disconnection and tagged
+ * queuing for a device.
+ */
+ u_int8_t queue_flags; /* Queue flags from the control page */
+ u_int8_t serial_num_len;
+ u_int8_t *serial_num;
+ u_int32_t qfrozen_cnt;
+ u_int32_t flags;
+#define CAM_DEV_UNCONFIGURED 0x01
+#define CAM_DEV_REL_TIMEOUT_PENDING 0x02
+#define CAM_DEV_REL_ON_COMPLETE 0x04
+#define CAM_DEV_REL_ON_QUEUE_EMPTY 0x08
+#define CAM_DEV_RESIZE_QUEUE_NEEDED 0x10
+#define CAM_DEV_TAG_AFTER_COUNT 0x20
+#define CAM_DEV_INQUIRY_DATA_VALID 0x40
+#define CAM_DEV_IN_DV 0x80
+#define CAM_DEV_DV_HIT_BOTTOM 0x100
+ u_int32_t tag_delay_count;
+#define CAM_TAG_DELAY_COUNT 5
+ u_int32_t tag_saved_openings;
+ u_int32_t refcount;
+ struct callout callout;
+};
+
+/*
+ * Each target is represented by an ET (Existing Target). These
+ * entries are created when a target is successfully probed with an
+ * identify, and removed when a device fails to respond after a number
+ * of retries, or a bus rescan finds the device missing.
+ */
+struct cam_et {
+ TAILQ_HEAD(, cam_ed) ed_entries;
+ TAILQ_ENTRY(cam_et) links;
+ struct cam_eb *bus;
+ target_id_t target_id;
+ u_int32_t refcount;
+ u_int generation;
+ struct timeval last_reset;
+};
+
+/*
+ * Each bus is represented by an EB (Existing Bus). These entries
+ * are created by calls to xpt_bus_register and deleted by calls to
+ * xpt_bus_deregister.
+ */
+struct cam_eb {
+ TAILQ_HEAD(, cam_et) et_entries;
+ TAILQ_ENTRY(cam_eb) links;
+ path_id_t path_id;
+ struct cam_sim *sim;
+ struct timeval last_reset;
+ u_int32_t flags;
+#define CAM_EB_RUNQ_SCHEDULED 0x01
+ u_int32_t refcount;
+ u_int generation;
+ device_t parent_dev;
+ struct xpt_xport *xport;
+};
+
+struct cam_path {
+ struct cam_periph *periph;
+ struct cam_eb *bus;
+ struct cam_et *target;
+ struct cam_ed *device;
+};
+
+struct xpt_xport * scsi_get_xport(void);
+struct xpt_xport * ata_get_xport(void);
+
+struct cam_ed * xpt_alloc_device(struct cam_eb *bus,
+ struct cam_et *target,
+ lun_id_t lun_id);
+void xpt_run_dev_sendq(struct cam_eb *bus);
+int xpt_schedule_dev(struct camq *queue, cam_pinfo *dev_pinfo,
+ u_int32_t new_priority);
+u_int32_t xpt_dev_ccbq_resize(struct cam_path *path, int newopenings);
+
+
+
+static __inline int
+xpt_schedule_dev_sendq(struct cam_eb *bus, struct cam_ed *dev)
+{
+ int retval;
+
+ if (dev->ccbq.dev_openings > 0) {
+ /*
+ * The priority of a device waiting for controller
+ * resources is that of the highest priority CCB
+ * enqueued.
+ */
+ retval =
+ xpt_schedule_dev(&bus->sim->devq->send_queue,
+ &dev->send_ccb_entry.pinfo,
+ CAMQ_GET_HEAD(&dev->ccbq.queue)->priority);
+ } else {
+ retval = 0;
+ }
+ return (retval);
+}
+
+MALLOC_DECLARE(M_CAMXPT);
+
+#endif
diff --git a/sys/cam/cam_xpt_periph.h b/sys/cam/cam_xpt_periph.h
index c6b8cc26748f..dbfb55eb7576 100644
--- a/sys/cam/cam_xpt_periph.h
+++ b/sys/cam/cam_xpt_periph.h
@@ -33,6 +33,7 @@
#ifndef _CAM_CAM_XPT_PERIPH_H
#define _CAM_CAM_XPT_PERIPH_H 1
+#include <cam/cam_queue.h>
#include <cam/cam_xpt.h>
/* Functions accessed by the peripheral drivers */
diff --git a/sys/cam/scsi/scsi_all.c b/sys/cam/scsi/scsi_all.c
index b0c487d09f4d..d4db50bee4c7 100644
--- a/sys/cam/scsi/scsi_all.c
+++ b/sys/cam/scsi/scsi_all.c
@@ -48,6 +48,7 @@ __FBSDID("$FreeBSD$");
#include <cam/cam.h>
#include <cam/cam_ccb.h>
+#include <cam/cam_queue.h>
#include <cam/cam_xpt.h>
#include <cam/scsi/scsi_all.h>
#include <sys/sbuf.h>
diff --git a/sys/cam/scsi/scsi_cd.c b/sys/cam/scsi/scsi_cd.c
index a095d8225d1c..287c6b626f32 100644
--- a/sys/cam/scsi/scsi_cd.c
+++ b/sys/cam/scsi/scsi_cd.c
@@ -496,6 +496,9 @@ cdasync(void *callback_arg, u_int32_t code,
if (cgd == NULL)
break;
+ if (cgd->protocol != PROTO_SCSI)
+ break;
+
if (SID_TYPE(&cgd->inq_data) != T_CDROM
&& SID_TYPE(&cgd->inq_data) != T_WORM)
break;
diff --git a/sys/cam/scsi/scsi_ch.c b/sys/cam/scsi/scsi_ch.c
index 892deac9b7c0..f8f39aba57c5 100644
--- a/sys/cam/scsi/scsi_ch.c
+++ b/sys/cam/scsi/scsi_ch.c
@@ -287,6 +287,9 @@ chasync(void *callback_arg, u_int32_t code, struct cam_path *path, void *arg)
if (cgd == NULL)
break;
+ if (cgd->protocol != PROTO_SCSI)
+ break;
+
if (SID_TYPE(&cgd->inq_data)!= T_CHANGER)
break;
diff --git a/sys/cam/scsi/scsi_da.c b/sys/cam/scsi/scsi_da.c
index beb517e4ee9e..f5ec895909be 100644
--- a/sys/cam/scsi/scsi_da.c
+++ b/sys/cam/scsi/scsi_da.c
@@ -1028,6 +1028,9 @@ daasync(void *callback_arg, u_int32_t code,
if (cgd == NULL)
break;
+ if (cgd->protocol != PROTO_SCSI)
+ break;
+
if (SID_TYPE(&cgd->inq_data) != T_DIRECT
&& SID_TYPE(&cgd->inq_data) != T_RBC
&& SID_TYPE(&cgd->inq_data) != T_OPTICAL)
@@ -1195,6 +1198,7 @@ daregister(struct cam_periph *periph, void *arg)
softc->quirks = DA_Q_NONE;
/* Check if the SIM does not want 6 byte commands */
+ bzero(&cpi, sizeof(cpi));
xpt_setup_ccb(&cpi.ccb_h, periph->path, /*priority*/1);
cpi.ccb_h.func_code = XPT_PATH_INQ;
xpt_action((union ccb *)&cpi);
@@ -1244,7 +1248,12 @@ daregister(struct cam_periph *periph, void *arg)
softc->disk->d_dump = dadump;
softc->disk->d_name = "da";
softc->disk->d_drv1 = periph;
- softc->disk->d_maxsize = DFLTPHYS; /* XXX: probably not arbitrary */
+ if (cpi.maxio == 0)
+ softc->disk->d_maxsize = DFLTPHYS; /* traditional default */
+ else if (cpi.maxio > MAXPHYS)
+ softc->disk->d_maxsize = MAXPHYS; /* for safety */
+ else
+ softc->disk->d_maxsize = cpi.maxio;
softc->disk->d_unit = periph->unit_number;
softc->disk->d_flags = 0;
if ((softc->quirks & DA_Q_NO_SYNC_CACHE) == 0)
diff --git a/sys/cam/scsi/scsi_pass.c b/sys/cam/scsi/scsi_pass.c
index 4e4e48fa4b88..755189183754 100644
--- a/sys/cam/scsi/scsi_pass.c
+++ b/sys/cam/scsi/scsi_pass.c
@@ -528,7 +528,8 @@ passsendccb(struct cam_periph *periph, union ccb *ccb, union ccb *inccb)
* ready), it will save a few cycles if we check for it here.
*/
if (((ccb->ccb_h.flags & CAM_DATA_PHYS) == 0)
- && (((ccb->ccb_h.func_code == XPT_SCSI_IO)
+ && (((ccb->ccb_h.func_code == XPT_SCSI_IO ||
+ ccb->ccb_h.func_code == XPT_ATA_IO)
&& ((ccb->ccb_h.flags & CAM_DIR_MASK) != CAM_DIR_NONE))
|| (ccb->ccb_h.func_code == XPT_DEV_MATCH))) {
diff --git a/sys/cam/scsi/scsi_pt.c b/sys/cam/scsi/scsi_pt.c
index c41d5e9079b5..183293fc7c7a 100644
--- a/sys/cam/scsi/scsi_pt.c
+++ b/sys/cam/scsi/scsi_pt.c
@@ -366,6 +366,9 @@ ptasync(void *callback_arg, u_int32_t code, struct cam_path *path, void *arg)
if (cgd == NULL)
break;
+ if (cgd->protocol != PROTO_SCSI)
+ break;
+
if (SID_TYPE(&cgd->inq_data) != T_PROCESSOR)
break;
diff --git a/sys/cam/scsi/scsi_sa.c b/sys/cam/scsi/scsi_sa.c
index 7ce5d559a5e0..254f2ba692ff 100644
--- a/sys/cam/scsi/scsi_sa.c
+++ b/sys/cam/scsi/scsi_sa.c
@@ -1398,6 +1398,9 @@ saasync(void *callback_arg, u_int32_t code,
if (cgd == NULL)
break;
+ if (cgd->protocol != PROTO_SCSI)
+ break;
+
if (SID_TYPE(&cgd->inq_data) != T_SEQUENTIAL)
break;
diff --git a/sys/cam/scsi/scsi_ses.c b/sys/cam/scsi/scsi_ses.c
index 87c12b490be3..825b883c17ca 100644
--- a/sys/cam/scsi/scsi_ses.c
+++ b/sys/cam/scsi/scsi_ses.c
@@ -251,6 +251,9 @@ sesasync(void *callback_arg, uint32_t code, struct cam_path *path, void *arg)
break;
}
+ if (cgd->protocol != PROTO_SCSI)
+ break;
+
inq_len = cgd->inq_data.additional_length + 4;
/*
diff --git a/sys/cam/scsi/scsi_sg.c b/sys/cam/scsi/scsi_sg.c
index acdb404378c5..4ab038bc2ee9 100644
--- a/sys/cam/scsi/scsi_sg.c
+++ b/sys/cam/scsi/scsi_sg.c
@@ -226,6 +226,9 @@ sgasync(void *callback_arg, uint32_t code, struct cam_path *path, void *arg)
if (cgd == NULL)
break;
+ if (cgd->protocol != PROTO_SCSI)
+ break;
+
/*
* Allocate a peripheral instance for this device and
* start the probe process.
diff --git a/sys/cam/scsi/scsi_xpt.c b/sys/cam/scsi/scsi_xpt.c
new file mode 100644
index 000000000000..8fcb457c8d0b
--- /dev/null
+++ b/sys/cam/scsi/scsi_xpt.c
@@ -0,0 +1,2382 @@
+/*-
+ * Implementation of the SCSI Transport
+ *
+ * Copyright (c) 1997, 1998, 1999 Justin T. Gibbs.
+ * Copyright (c) 1997, 1998, 1999 Kenneth D. Merry.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions, and the following disclaimer,
+ * without modification, immediately at the beginning of the file.
+ * 2. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
+ * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+#include <sys/param.h>
+#include <sys/bus.h>
+#include <sys/systm.h>
+#include <sys/types.h>
+#include <sys/malloc.h>
+#include <sys/kernel.h>
+#include <sys/time.h>
+#include <sys/conf.h>
+#include <sys/fcntl.h>
+#include <sys/md5.h>
+#include <sys/interrupt.h>
+#include <sys/sbuf.h>
+
+#include <sys/lock.h>
+#include <sys/mutex.h>
+#include <sys/sysctl.h>
+
+#ifdef PC98
+#include <pc98/pc98/pc98_machdep.h> /* geometry translation */
+#endif
+
+#include <cam/cam.h>
+#include <cam/cam_ccb.h>
+#include <cam/cam_queue.h>
+#include <cam/cam_periph.h>
+#include <cam/cam_sim.h>
+#include <cam/cam_xpt.h>
+#include <cam/cam_xpt_sim.h>
+#include <cam/cam_xpt_periph.h>
+#include <cam/cam_xpt_internal.h>
+#include <cam/cam_debug.h>
+
+#include <cam/scsi/scsi_all.h>
+#include <cam/scsi/scsi_message.h>
+#include <cam/scsi/scsi_pass.h>
+#include <machine/stdarg.h> /* for xpt_print below */
+#include "opt_cam.h"
+
+struct scsi_quirk_entry {
+ struct scsi_inquiry_pattern inq_pat;
+ u_int8_t quirks;
+#define CAM_QUIRK_NOLUNS 0x01
+#define CAM_QUIRK_NOSERIAL 0x02
+#define CAM_QUIRK_HILUNS 0x04
+#define CAM_QUIRK_NOHILUNS 0x08
+ u_int mintags;
+ u_int maxtags;
+};
+#define SCSI_QUIRK(dev) ((struct scsi_quirk_entry *)((dev)->quirk))
+
+static int cam_srch_hi = 0;
+TUNABLE_INT("kern.cam.cam_srch_hi", &cam_srch_hi);
+static int sysctl_cam_search_luns(SYSCTL_HANDLER_ARGS);
+SYSCTL_PROC(_kern_cam, OID_AUTO, cam_srch_hi, CTLTYPE_INT|CTLFLAG_RW, 0, 0,
+ sysctl_cam_search_luns, "I",
+ "allow search above LUN 7 for SCSI3 and greater devices");
+
+#define CAM_SCSI2_MAXLUN 8
+/*
+ * If we're not quirked to search <= the first 8 luns
+ * and we are either quirked to search above lun 8,
+ * or we're > SCSI-2 and we've enabled hilun searching,
+ * or we're > SCSI-2 and the last lun was a success,
+ * we can look for luns above lun 8.
+ */
+#define CAN_SRCH_HI_SPARSE(dv) \
+ (((SCSI_QUIRK(dv)->quirks & CAM_QUIRK_NOHILUNS) == 0) \
+ && ((SCSI_QUIRK(dv)->quirks & CAM_QUIRK_HILUNS) \
+ || (SID_ANSI_REV(&dv->inq_data) > SCSI_REV_2 && cam_srch_hi)))
+
+#define CAN_SRCH_HI_DENSE(dv) \
+ (((SCSI_QUIRK(dv)->quirks & CAM_QUIRK_NOHILUNS) == 0) \
+ && ((SCSI_QUIRK(dv)->quirks & CAM_QUIRK_HILUNS) \
+ || (SID_ANSI_REV(&dv->inq_data) > SCSI_REV_2)))
+
+static periph_init_t probe_periph_init;
+
+static struct periph_driver probe_driver =
+{
+ probe_periph_init, "probe",
+ TAILQ_HEAD_INITIALIZER(probe_driver.units)
+};
+
+PERIPHDRIVER_DECLARE(probe, probe_driver);
+
+typedef enum {
+ PROBE_TUR,
+ PROBE_INQUIRY, /* this counts as DV0 for Basic Domain Validation */
+ PROBE_FULL_INQUIRY,
+ PROBE_MODE_SENSE,
+ PROBE_SERIAL_NUM_0,
+ PROBE_SERIAL_NUM_1,
+ PROBE_TUR_FOR_NEGOTIATION,
+ PROBE_INQUIRY_BASIC_DV1,
+ PROBE_INQUIRY_BASIC_DV2,
+ PROBE_DV_EXIT,
+ PROBE_INVALID
+} probe_action;
+
+static char *probe_action_text[] = {
+ "PROBE_TUR",
+ "PROBE_INQUIRY",
+ "PROBE_FULL_INQUIRY",
+ "PROBE_MODE_SENSE",
+ "PROBE_SERIAL_NUM_0",
+ "PROBE_SERIAL_NUM_1",
+ "PROBE_TUR_FOR_NEGOTIATION",
+ "PROBE_INQUIRY_BASIC_DV1",
+ "PROBE_INQUIRY_BASIC_DV2",
+ "PROBE_DV_EXIT",
+ "PROBE_INVALID"
+};
+
+#define PROBE_SET_ACTION(softc, newaction) \
+do { \
+ char **text; \
+ text = probe_action_text; \
+ CAM_DEBUG((softc)->periph->path, CAM_DEBUG_INFO, \
+ ("Probe %s to %s\n", text[(softc)->action], \
+ text[(newaction)])); \
+ (softc)->action = (newaction); \
+} while(0)
+
+typedef enum {
+ PROBE_INQUIRY_CKSUM = 0x01,
+ PROBE_SERIAL_CKSUM = 0x02,
+ PROBE_NO_ANNOUNCE = 0x04
+} probe_flags;
+
+typedef struct {
+ TAILQ_HEAD(, ccb_hdr) request_ccbs;
+ probe_action action;
+ union ccb saved_ccb;
+ probe_flags flags;
+ MD5_CTX context;
+ u_int8_t digest[16];
+ struct cam_periph *periph;
+} probe_softc;
+
+static const char quantum[] = "QUANTUM";
+static const char sony[] = "SONY";
+static const char west_digital[] = "WDIGTL";
+static const char samsung[] = "SAMSUNG";
+static const char seagate[] = "SEAGATE";
+static const char microp[] = "MICROP";
+
+static struct scsi_quirk_entry scsi_quirk_table[] =
+{
+ {
+ /* Reports QUEUE FULL for temporary resource shortages */
+ { T_DIRECT, SIP_MEDIA_FIXED, quantum, "XP39100*", "*" },
+ /*quirks*/0, /*mintags*/24, /*maxtags*/32
+ },
+ {
+ /* Reports QUEUE FULL for temporary resource shortages */
+ { T_DIRECT, SIP_MEDIA_FIXED, quantum, "XP34550*", "*" },
+ /*quirks*/0, /*mintags*/24, /*maxtags*/32
+ },
+ {
+ /* Reports QUEUE FULL for temporary resource shortages */
+ { T_DIRECT, SIP_MEDIA_FIXED, quantum, "XP32275*", "*" },
+ /*quirks*/0, /*mintags*/24, /*maxtags*/32
+ },
+ {
+ /* Broken tagged queuing drive */
+ { T_DIRECT, SIP_MEDIA_FIXED, microp, "4421-07*", "*" },
+ /*quirks*/0, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /* Broken tagged queuing drive */
+ { T_DIRECT, SIP_MEDIA_FIXED, "HP", "C372*", "*" },
+ /*quirks*/0, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /* Broken tagged queuing drive */
+ { T_DIRECT, SIP_MEDIA_FIXED, microp, "3391*", "x43h" },
+ /*quirks*/0, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /*
+ * Unfortunately, the Quantum Atlas III has the same
+ * problem as the Atlas II drives above.
+ * Reported by: "Johan Granlund" <johan@granlund.nu>
+ *
+ * For future reference, the drive with the problem was:
+ * QUANTUM QM39100TD-SW N1B0
+ *
+ * It's possible that Quantum will fix the problem in later
+ * firmware revisions. If that happens, the quirk entry
+ * will need to be made specific to the firmware revisions
+ * with the problem.
+ *
+ */
+ /* Reports QUEUE FULL for temporary resource shortages */
+ { T_DIRECT, SIP_MEDIA_FIXED, quantum, "QM39100*", "*" },
+ /*quirks*/0, /*mintags*/24, /*maxtags*/32
+ },
+ {
+ /*
+ * 18 Gig Atlas III, same problem as the 9G version.
+ * Reported by: Andre Albsmeier
+ * <andre.albsmeier@mchp.siemens.de>
+ *
+ * For future reference, the drive with the problem was:
+ * QUANTUM QM318000TD-S N491
+ */
+ /* Reports QUEUE FULL for temporary resource shortages */
+ { T_DIRECT, SIP_MEDIA_FIXED, quantum, "QM318000*", "*" },
+ /*quirks*/0, /*mintags*/24, /*maxtags*/32
+ },
+ {
+ /*
+ * Broken tagged queuing drive
+ * Reported by: Bret Ford <bford@uop.cs.uop.edu>
+ * and: Martin Renters <martin@tdc.on.ca>
+ */
+ { T_DIRECT, SIP_MEDIA_FIXED, seagate, "ST410800*", "71*" },
+ /*quirks*/0, /*mintags*/0, /*maxtags*/0
+ },
+ /*
+ * The Seagate Medalist Pro drives have very poor write
+ * performance with anything more than 2 tags.
+ *
+ * Reported by: Paul van der Zwan <paulz@trantor.xs4all.nl>
+ * Drive: <SEAGATE ST36530N 1444>
+ *
+ * Reported by: Jeremy Lea <reg@shale.csir.co.za>
+ * Drive: <SEAGATE ST34520W 1281>
+ *
+ * No one has actually reported that the 9G version
+ * (ST39140*) of the Medalist Pro has the same problem, but
+ * we're assuming that it does because the 4G and 6.5G
+ * versions of the drive are broken.
+ */
+ {
+ { T_DIRECT, SIP_MEDIA_FIXED, seagate, "ST34520*", "*"},
+ /*quirks*/0, /*mintags*/2, /*maxtags*/2
+ },
+ {
+ { T_DIRECT, SIP_MEDIA_FIXED, seagate, "ST36530*", "*"},
+ /*quirks*/0, /*mintags*/2, /*maxtags*/2
+ },
+ {
+ { T_DIRECT, SIP_MEDIA_FIXED, seagate, "ST39140*", "*"},
+ /*quirks*/0, /*mintags*/2, /*maxtags*/2
+ },
+ {
+ /*
+ * Slow when tagged queueing is enabled. Write performance
+ * steadily drops off with more and more concurrent
+ * transactions. Best sequential write performance with
+ * tagged queueing turned off and write caching turned on.
+ *
+ * PR: kern/10398
+ * Submitted by: Hideaki Okada <hokada@isl.melco.co.jp>
+ * Drive: DCAS-34330 w/ "S65A" firmware.
+ *
+ * The drive with the problem had the "S65A" firmware
+ * revision, and has also been reported (by Stephen J.
+ * Roznowski <sjr@home.net>) for a drive with the "S61A"
+ * firmware revision.
+ *
+ * Although no one has reported problems with the 2 gig
+ * version of the DCAS drive, the assumption is that it
+ * has the same problems as the 4 gig version. Therefore
+ * this quirk entry disables tagged queueing for all
+ * DCAS drives.
+ */
+ { T_DIRECT, SIP_MEDIA_FIXED, "IBM", "DCAS*", "*" },
+ /*quirks*/0, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /* Broken tagged queuing drive */
+ { T_DIRECT, SIP_MEDIA_REMOVABLE, "iomega", "jaz*", "*" },
+ /*quirks*/0, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /* Broken tagged queuing drive */
+ { T_DIRECT, SIP_MEDIA_FIXED, "CONNER", "CFP2107*", "*" },
+ /*quirks*/0, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /* This does not support other than LUN 0 */
+ { T_DIRECT, SIP_MEDIA_FIXED, "VMware*", "*", "*" },
+ CAM_QUIRK_NOLUNS, /*mintags*/2, /*maxtags*/255
+ },
+ {
+ /*
+ * Broken tagged queuing drive.
+ * Submitted by:
+ * NAKAJI Hiroyuki <nakaji@zeisei.dpri.kyoto-u.ac.jp>
+ * in PR kern/9535
+ */
+ { T_DIRECT, SIP_MEDIA_FIXED, samsung, "WN34324U*", "*" },
+ /*quirks*/0, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /*
+ * Slow when tagged queueing is enabled. (1.5MB/sec versus
+ * 8MB/sec.)
+ * Submitted by: Andrew Gallatin <gallatin@cs.duke.edu>
+ * Best performance with these drives is achieved with
+ * tagged queueing turned off, and write caching turned on.
+ */
+ { T_DIRECT, SIP_MEDIA_FIXED, west_digital, "WDE*", "*" },
+ /*quirks*/0, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /*
+ * Slow when tagged queueing is enabled. (1.5MB/sec versus
+ * 8MB/sec.)
+ * Submitted by: Andrew Gallatin <gallatin@cs.duke.edu>
+ * Best performance with these drives is achieved with
+ * tagged queueing turned off, and write caching turned on.
+ */
+ { T_DIRECT, SIP_MEDIA_FIXED, west_digital, "ENTERPRISE", "*" },
+ /*quirks*/0, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /*
+ * Doesn't handle queue full condition correctly,
+ * so we need to limit maxtags to what the device
+ * can handle instead of determining this automatically.
+ */
+ { T_DIRECT, SIP_MEDIA_FIXED, samsung, "WN321010S*", "*" },
+ /*quirks*/0, /*mintags*/2, /*maxtags*/32
+ },
+ {
+ /* Really only one LUN */
+ { T_ENCLOSURE, SIP_MEDIA_FIXED, "SUN", "SENA", "*" },
+ CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /* I can't believe we need a quirk for DPT volumes. */
+ { T_ANY, SIP_MEDIA_FIXED|SIP_MEDIA_REMOVABLE, "DPT", "*", "*" },
+ CAM_QUIRK_NOLUNS,
+ /*mintags*/0, /*maxtags*/255
+ },
+ {
+ /*
+ * Many Sony CDROM drives don't like multi-LUN probing.
+ */
+ { T_CDROM, SIP_MEDIA_REMOVABLE, sony, "CD-ROM CDU*", "*" },
+ CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /*
+ * This drive doesn't like multiple LUN probing.
+ * Submitted by: Parag Patel <parag@cgt.com>
+ */
+ { T_WORM, SIP_MEDIA_REMOVABLE, sony, "CD-R CDU9*", "*" },
+ CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ { T_WORM, SIP_MEDIA_REMOVABLE, "YAMAHA", "CDR100*", "*" },
+ CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /*
+ * The 8200 doesn't like multi-lun probing, and probably
+ * doesn't like serial number requests either.
+ */
+ {
+ T_SEQUENTIAL, SIP_MEDIA_REMOVABLE, "EXABYTE",
+ "EXB-8200*", "*"
+ },
+ CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /*
+ * Let's try the same as above, but for a drive that says
+ * it's an IPL-6860 but is actually an EXB 8200.
+ */
+ {
+ T_SEQUENTIAL, SIP_MEDIA_REMOVABLE, "EXABYTE",
+ "IPL-6860*", "*"
+ },
+ CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /*
+ * These Hitachi drives don't like multi-lun probing.
+ * The PR submitter has a DK319H, but says that the Linux
+ * kernel has a similar work-around for the DK312 and DK314,
+ * so all DK31* drives are quirked here.
+ * PR: misc/18793
+ * Submitted by: Paul Haddad <paul@pth.com>
+ */
+ { T_DIRECT, SIP_MEDIA_FIXED, "HITACHI", "DK31*", "*" },
+ CAM_QUIRK_NOLUNS, /*mintags*/2, /*maxtags*/255
+ },
+ {
+ /*
+ * The Hitachi CJ series with J8A8 firmware apparently has
+ * problems with tagged commands.
+ * PR: 23536
+ * Reported by: amagai@nue.org
+ */
+ { T_DIRECT, SIP_MEDIA_FIXED, "HITACHI", "DK32CJ*", "J8A8" },
+ CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /*
+ * These are the large storage arrays.
+ * Submitted by: William Carrel <william.carrel@infospace.com>
+ */
+ { T_DIRECT, SIP_MEDIA_FIXED, "HITACHI", "OPEN*", "*" },
+ CAM_QUIRK_HILUNS, 2, 1024
+ },
+ {
+ /*
+ * This old revision of the TDC3600 is also SCSI-1, and
+ * hangs upon serial number probing.
+ */
+ {
+ T_SEQUENTIAL, SIP_MEDIA_REMOVABLE, "TANDBERG",
+ " TDC 3600", "U07:"
+ },
+ CAM_QUIRK_NOSERIAL, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /*
+ * Would respond to all LUNs if asked for.
+ */
+ {
+ T_SEQUENTIAL, SIP_MEDIA_REMOVABLE, "CALIPER",
+ "CP150", "*"
+ },
+ CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /*
+ * Would respond to all LUNs if asked for.
+ */
+ {
+ T_SEQUENTIAL, SIP_MEDIA_REMOVABLE, "KENNEDY",
+ "96X2*", "*"
+ },
+ CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /* Submitted by: Matthew Dodd <winter@jurai.net> */
+ { T_PROCESSOR, SIP_MEDIA_FIXED, "Cabletrn", "EA41*", "*" },
+ CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /* Submitted by: Matthew Dodd <winter@jurai.net> */
+ { T_PROCESSOR, SIP_MEDIA_FIXED, "CABLETRN", "EA41*", "*" },
+ CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /* TeraSolutions special settings for TRC-22 RAID */
+ { T_DIRECT, SIP_MEDIA_FIXED, "TERASOLU", "TRC-22", "*" },
+ /*quirks*/0, /*mintags*/55, /*maxtags*/255
+ },
+ {
+ /* Veritas Storage Appliance */
+ { T_DIRECT, SIP_MEDIA_FIXED, "VERITAS", "*", "*" },
+ CAM_QUIRK_HILUNS, /*mintags*/2, /*maxtags*/1024
+ },
+ {
+ /*
+ * Would respond to all LUNs. Device type and removable
+ * flag are jumper-selectable.
+ */
+ { T_ANY, SIP_MEDIA_REMOVABLE|SIP_MEDIA_FIXED, "MaxOptix",
+ "Tahiti 1", "*"
+ },
+ CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /* EasyRAID E5A aka. areca ARC-6010 */
+ { T_DIRECT, SIP_MEDIA_FIXED, "easyRAID", "*", "*" },
+ CAM_QUIRK_NOHILUNS, /*mintags*/2, /*maxtags*/255
+ },
+ {
+ { T_ENCLOSURE, SIP_MEDIA_FIXED, "DP", "BACKPLANE", "*" },
+ CAM_QUIRK_NOLUNS, /*mintags*/0, /*maxtags*/0
+ },
+ {
+ /* Default tagged queuing parameters for all devices */
+ {
+ T_ANY, SIP_MEDIA_REMOVABLE|SIP_MEDIA_FIXED,
+ /*vendor*/"*", /*product*/"*", /*revision*/"*"
+ },
+ /*quirks*/0, /*mintags*/2, /*maxtags*/255
+ },
+};
+
+static const int scsi_quirk_table_size =
+ sizeof(scsi_quirk_table) / sizeof(*scsi_quirk_table);
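The table above is searched front to back by cam_quirkmatch() using shell-style wildcard patterns on the vendor, product, and revision strings, so specific entries must precede the catch-all default at the end. A minimal userspace sketch of that lookup; the `match_pat` helper and the cut-down table are illustrative only, not the kernel's actual matcher:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Minimal pattern match: a single trailing '*' matches any suffix. */
static int match_pat(const char *pat, const char *s)
{
	size_t n = strcspn(pat, "*");

	if (strncmp(pat, s, n) != 0)
		return (0);
	return (pat[n] == '*' || s[n] == '\0');
}

struct quirk {
	const char *vendor, *product;
	int mintags, maxtags;
};

/* Last entry is a catch-all, mirroring the kernel table's default. */
static const struct quirk table[] = {
	{ "HITACHI", "OPEN*", 2, 1024 },
	{ "TERASOLU", "TRC-22", 55, 255 },
	{ "*", "*", 2, 255 },
};

static const struct quirk *
find_quirk(const char *vendor, const char *product)
{
	size_t i;

	for (i = 0; i < sizeof(table) / sizeof(*table); i++)
		if (match_pat(table[i].vendor, vendor) &&
		    match_pat(table[i].product, product))
			return (&table[i]);
	return (NULL);	/* unreachable: the catch-all always matches */
}
```

Because the default entry matches everything, a lookup can never fail, which is why scsi_find_quirk() below panics if no match is found.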
+
+static cam_status proberegister(struct cam_periph *periph,
+ void *arg);
+static void probeschedule(struct cam_periph *probe_periph);
+static void probestart(struct cam_periph *periph, union ccb *start_ccb);
+static void proberequestdefaultnegotiation(struct cam_periph *periph);
+static int proberequestbackoff(struct cam_periph *periph,
+ struct cam_ed *device);
+static void probedone(struct cam_periph *periph, union ccb *done_ccb);
+static void probecleanup(struct cam_periph *periph);
+static void scsi_find_quirk(struct cam_ed *device);
+static void scsi_scan_bus(struct cam_periph *periph, union ccb *ccb);
+static void scsi_scan_lun(struct cam_periph *periph,
+ struct cam_path *path, cam_flags flags,
+ union ccb *ccb);
+static void xptscandone(struct cam_periph *periph, union ccb *done_ccb);
+static struct cam_ed *
+ scsi_alloc_device(struct cam_eb *bus, struct cam_et *target,
+ lun_id_t lun_id);
+static void scsi_devise_transport(struct cam_path *path);
+static void scsi_set_transfer_settings(struct ccb_trans_settings *cts,
+ struct cam_ed *device,
+ int async_update);
+static void scsi_toggle_tags(struct cam_path *path);
+static void scsi_dev_async(u_int32_t async_code,
+ struct cam_eb *bus,
+ struct cam_et *target,
+ struct cam_ed *device,
+ void *async_arg);
+static void scsi_action(union ccb *start_ccb);
+
+static struct xpt_xport scsi_xport = {
+ .alloc_device = scsi_alloc_device,
+ .action = scsi_action,
+ .async = scsi_dev_async,
+};
+
+struct xpt_xport *
+scsi_get_xport(void)
+{
+ return (&scsi_xport);
+}
+
+static void
+probe_periph_init(void)
+{
+}
+
+static cam_status
+proberegister(struct cam_periph *periph, void *arg)
+{
+ union ccb *request_ccb; /* CCB representing the probe request */
+ cam_status status;
+ probe_softc *softc;
+
+ request_ccb = (union ccb *)arg;
+ if (periph == NULL) {
+ printf("proberegister: periph was NULL!!\n");
+ return(CAM_REQ_CMP_ERR);
+ }
+
+ if (request_ccb == NULL) {
+ printf("proberegister: no probe CCB, "
+ "can't register device\n");
+ return(CAM_REQ_CMP_ERR);
+ }
+
+ softc = (probe_softc *)malloc(sizeof(*softc), M_CAMXPT, M_NOWAIT);
+
+ if (softc == NULL) {
+ printf("proberegister: Unable to probe new device. "
+ "Unable to allocate softc\n");
+ return(CAM_REQ_CMP_ERR);
+ }
+ TAILQ_INIT(&softc->request_ccbs);
+ TAILQ_INSERT_TAIL(&softc->request_ccbs, &request_ccb->ccb_h,
+ periph_links.tqe);
+ softc->flags = 0;
+ periph->softc = softc;
+ softc->periph = periph;
+ softc->action = PROBE_INVALID;
+ status = cam_periph_acquire(periph);
+ if (status != CAM_REQ_CMP) {
+ return (status);
+ }
+
+ /*
+ * Ensure we've waited at least a bus settle
+ * delay before attempting to probe the device.
+ * For HBAs that don't do bus resets, this won't make a difference.
+ */
+ cam_periph_freeze_after_event(periph, &periph->path->bus->last_reset,
+ scsi_delay);
+ probeschedule(periph);
+ return(CAM_REQ_CMP);
+}
+
+static void
+probeschedule(struct cam_periph *periph)
+{
+ struct ccb_pathinq cpi;
+ union ccb *ccb;
+ probe_softc *softc;
+
+ softc = (probe_softc *)periph->softc;
+ ccb = (union ccb *)TAILQ_FIRST(&softc->request_ccbs);
+
+ xpt_setup_ccb(&cpi.ccb_h, periph->path, /*priority*/1);
+ cpi.ccb_h.func_code = XPT_PATH_INQ;
+ xpt_action((union ccb *)&cpi);
+
+ /*
+ * If a device has gone away and another device, or the same one,
+ * is back in the same place, it should have a unit attention
+ * condition pending. It will not report the unit attention in
+ * response to an inquiry, which may leave invalid transfer
+ * negotiations in effect. The TUR will reveal the unit attention
+ * condition. Only send the TUR for lun 0, since some devices
+ * will get confused by commands other than inquiry to non-existent
+ * luns. If you think a device has gone away start your scan from
+ * lun 0. This will ensure that any bogus transfer settings are
+ * invalidated.
+ *
+ * If we haven't seen the device before and the controller supports
+ * some kind of transfer negotiation, negotiate with the first
+ * sent command if no bus reset was performed at startup. This
+ * ensures that the device is not confused by transfer negotiation
+ * settings left over by loader or BIOS action.
+ */
+ if (((ccb->ccb_h.path->device->flags & CAM_DEV_UNCONFIGURED) == 0)
+ && (ccb->ccb_h.target_lun == 0)) {
+ PROBE_SET_ACTION(softc, PROBE_TUR);
+ } else if ((cpi.hba_inquiry & (PI_WIDE_32|PI_WIDE_16|PI_SDTR_ABLE)) != 0
+ && (cpi.hba_misc & PIM_NOBUSRESET) != 0) {
+ proberequestdefaultnegotiation(periph);
+ PROBE_SET_ACTION(softc, PROBE_INQUIRY);
+ } else {
+ PROBE_SET_ACTION(softc, PROBE_INQUIRY);
+ }
+
+ if (ccb->crcn.flags & CAM_EXPECT_INQ_CHANGE)
+ softc->flags |= PROBE_NO_ANNOUNCE;
+ else
+ softc->flags &= ~PROBE_NO_ANNOUNCE;
+
+ xpt_schedule(periph, ccb->ccb_h.pinfo.priority);
+}
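probeschedule() picks the first probe step from three cases: an already-configured device at lun 0 gets a TUR to flush any pending unit attention, an unseen device behind a negotiating, no-bus-reset controller gets default negotiation before the INQUIRY, and everything else starts with a plain INQUIRY. That decision, pulled out as a standalone sketch; the enum and function name are invented for illustration:

```c
#include <assert.h>

enum first_step { STEP_TUR, STEP_NEGOTIATE_THEN_INQUIRY, STEP_INQUIRY };

/*
 * Mirror of probeschedule()'s initial-action choice:
 *   configured device, lun 0      -> TUR first
 *   can negotiate, no bus reset   -> negotiate, then INQUIRY
 *   otherwise                     -> plain INQUIRY
 */
static enum first_step
pick_first_step(int configured, int lun, int can_negotiate, int no_bus_reset)
{
	if (configured && lun == 0)
		return (STEP_TUR);
	if (can_negotiate && no_bus_reset)
		return (STEP_NEGOTIATE_THEN_INQUIRY);
	return (STEP_INQUIRY);
}
```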
+
+static void
+probestart(struct cam_periph *periph, union ccb *start_ccb)
+{
+ /* Probe the device that our peripheral driver points to */
+ struct ccb_scsiio *csio;
+ probe_softc *softc;
+
+ CAM_DEBUG(start_ccb->ccb_h.path, CAM_DEBUG_TRACE, ("probestart\n"));
+
+ softc = (probe_softc *)periph->softc;
+ csio = &start_ccb->csio;
+
+ switch (softc->action) {
+ case PROBE_TUR:
+ case PROBE_TUR_FOR_NEGOTIATION:
+ case PROBE_DV_EXIT:
+ {
+ scsi_test_unit_ready(csio,
+ /*retries*/10,
+ probedone,
+ MSG_SIMPLE_Q_TAG,
+ SSD_FULL_SIZE,
+ /*timeout*/60000);
+ break;
+ }
+ case PROBE_INQUIRY:
+ case PROBE_FULL_INQUIRY:
+ case PROBE_INQUIRY_BASIC_DV1:
+ case PROBE_INQUIRY_BASIC_DV2:
+ {
+ u_int inquiry_len;
+ struct scsi_inquiry_data *inq_buf;
+
+ inq_buf = &periph->path->device->inq_data;
+
+ /*
+ * If the device is currently configured, we calculate an
+ * MD5 checksum of the inquiry data, and if the serial number
+ * length is greater than 0, add the serial number data
+ * into the checksum as well. Once the inquiry and the
+ * serial number check finish, we attempt to figure out
+ * whether we still have the same device.
+ */
+ if ((periph->path->device->flags & CAM_DEV_UNCONFIGURED) == 0) {
+
+ MD5Init(&softc->context);
+ MD5Update(&softc->context, (unsigned char *)inq_buf,
+ sizeof(struct scsi_inquiry_data));
+ softc->flags |= PROBE_INQUIRY_CKSUM;
+ if (periph->path->device->serial_num_len > 0) {
+ MD5Update(&softc->context,
+ periph->path->device->serial_num,
+ periph->path->device->serial_num_len);
+ softc->flags |= PROBE_SERIAL_CKSUM;
+ }
+ MD5Final(softc->digest, &softc->context);
+ }
+
+ if (softc->action == PROBE_INQUIRY)
+ inquiry_len = SHORT_INQUIRY_LENGTH;
+ else
+ inquiry_len = SID_ADDITIONAL_LENGTH(inq_buf);
+
+ /*
+ * Some parallel SCSI devices fail to send an
+ * ignore wide residue message when dealing with
+ * odd length inquiry requests. Round up to be
+ * safe.
+ */
+ inquiry_len = roundup2(inquiry_len, 2);
+
+ if (softc->action == PROBE_INQUIRY_BASIC_DV1
+ || softc->action == PROBE_INQUIRY_BASIC_DV2) {
+ inq_buf = malloc(inquiry_len, M_CAMXPT, M_NOWAIT);
+ }
+ if (inq_buf == NULL) {
+ xpt_print(periph->path, "malloc failure- skipping Basic "
+ "Domain Validation\n");
+ PROBE_SET_ACTION(softc, PROBE_DV_EXIT);
+ scsi_test_unit_ready(csio,
+ /*retries*/4,
+ probedone,
+ MSG_SIMPLE_Q_TAG,
+ SSD_FULL_SIZE,
+ /*timeout*/60000);
+ break;
+ }
+ scsi_inquiry(csio,
+ /*retries*/4,
+ probedone,
+ MSG_SIMPLE_Q_TAG,
+ (u_int8_t *)inq_buf,
+ inquiry_len,
+ /*evpd*/FALSE,
+ /*page_code*/0,
+ SSD_MIN_SIZE,
+ /*timeout*/60 * 1000);
+ break;
+ }
+ case PROBE_MODE_SENSE:
+ {
+ void *mode_buf;
+ int mode_buf_len;
+
+ mode_buf_len = sizeof(struct scsi_mode_header_6)
+ + sizeof(struct scsi_mode_blk_desc)
+ + sizeof(struct scsi_control_page);
+ mode_buf = malloc(mode_buf_len, M_CAMXPT, M_NOWAIT);
+ if (mode_buf != NULL) {
+ scsi_mode_sense(csio,
+ /*retries*/4,
+ probedone,
+ MSG_SIMPLE_Q_TAG,
+ /*dbd*/FALSE,
+ SMS_PAGE_CTRL_CURRENT,
+ SMS_CONTROL_MODE_PAGE,
+ mode_buf,
+ mode_buf_len,
+ SSD_FULL_SIZE,
+ /*timeout*/60000);
+ break;
+ }
+ xpt_print(periph->path, "Unable to mode sense control page - "
+ "malloc failure\n");
+ PROBE_SET_ACTION(softc, PROBE_SERIAL_NUM_0);
+ }
+ /* FALLTHROUGH */
+ case PROBE_SERIAL_NUM_0:
+ {
+ struct scsi_vpd_supported_page_list *vpd_list = NULL;
+ struct cam_ed *device;
+
+ device = periph->path->device;
+ if ((SCSI_QUIRK(device)->quirks & CAM_QUIRK_NOSERIAL) == 0) {
+ vpd_list = malloc(sizeof(*vpd_list), M_CAMXPT,
+ M_NOWAIT | M_ZERO);
+ }
+
+ if (vpd_list != NULL) {
+ scsi_inquiry(csio,
+ /*retries*/4,
+ probedone,
+ MSG_SIMPLE_Q_TAG,
+ (u_int8_t *)vpd_list,
+ sizeof(*vpd_list),
+ /*evpd*/TRUE,
+ SVPD_SUPPORTED_PAGE_LIST,
+ SSD_MIN_SIZE,
+ /*timeout*/60 * 1000);
+ break;
+ }
+ /*
+ * We'll have to do without; let our probedone
+ * routine finish up for us.
+ */
+ start_ccb->csio.data_ptr = NULL;
+ probedone(periph, start_ccb);
+ return;
+ }
+ case PROBE_SERIAL_NUM_1:
+ {
+ struct scsi_vpd_unit_serial_number *serial_buf;
+ struct cam_ed* device;
+
+ serial_buf = NULL;
+ device = periph->path->device;
+ device->serial_num = NULL;
+ device->serial_num_len = 0;
+
+ serial_buf = (struct scsi_vpd_unit_serial_number *)
+ malloc(sizeof(*serial_buf), M_CAMXPT, M_NOWAIT|M_ZERO);
+
+ if (serial_buf != NULL) {
+ scsi_inquiry(csio,
+ /*retries*/4,
+ probedone,
+ MSG_SIMPLE_Q_TAG,
+ (u_int8_t *)serial_buf,
+ sizeof(*serial_buf),
+ /*evpd*/TRUE,
+ SVPD_UNIT_SERIAL_NUMBER,
+ SSD_MIN_SIZE,
+ /*timeout*/60 * 1000);
+ break;
+ }
+ /*
+ * We'll have to do without; let our probedone
+ * routine finish up for us.
+ */
+ start_ccb->csio.data_ptr = NULL;
+ probedone(periph, start_ccb);
+ return;
+ }
+ case PROBE_INVALID:
+ CAM_DEBUG(start_ccb->ccb_h.path, CAM_DEBUG_INFO,
+ ("probestart: invalid action state\n"));
+ default:
+ break;
+ }
+ xpt_action(start_ccb);
+}
+
+static void
+proberequestdefaultnegotiation(struct cam_periph *periph)
+{
+ struct ccb_trans_settings cts;
+
+ xpt_setup_ccb(&cts.ccb_h, periph->path, /*priority*/1);
+ cts.ccb_h.func_code = XPT_GET_TRAN_SETTINGS;
+ cts.type = CTS_TYPE_USER_SETTINGS;
+ xpt_action((union ccb *)&cts);
+ if ((cts.ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
+ return;
+ }
+ cts.ccb_h.func_code = XPT_SET_TRAN_SETTINGS;
+ cts.type = CTS_TYPE_CURRENT_SETTINGS;
+ xpt_action((union ccb *)&cts);
+}
+
+/*
+ * Backoff Negotiation Code- only pertinent for SPI devices.
+ */
+static int
+proberequestbackoff(struct cam_periph *periph, struct cam_ed *device)
+{
+ struct ccb_trans_settings cts;
+ struct ccb_trans_settings_spi *spi;
+
+ memset(&cts, 0, sizeof (cts));
+ xpt_setup_ccb(&cts.ccb_h, periph->path, /*priority*/1);
+ cts.ccb_h.func_code = XPT_GET_TRAN_SETTINGS;
+ cts.type = CTS_TYPE_CURRENT_SETTINGS;
+ xpt_action((union ccb *)&cts);
+ if ((cts.ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
+ if (bootverbose) {
+ xpt_print(periph->path,
+ "failed to get current device settings\n");
+ }
+ return (0);
+ }
+ if (cts.transport != XPORT_SPI) {
+ if (bootverbose) {
+ xpt_print(periph->path, "not SPI transport\n");
+ }
+ return (0);
+ }
+ spi = &cts.xport_specific.spi;
+
+ /*
+ * We cannot renegotiate sync rate if we don't have one.
+ */
+ if ((spi->valid & CTS_SPI_VALID_SYNC_RATE) == 0) {
+ if (bootverbose) {
+ xpt_print(periph->path, "no sync rate known\n");
+ }
+ return (0);
+ }
+
+ /*
+ * We'll assert that we don't have to touch PPR options- the
+ * SIM will see what we do with period and offset and adjust
+ * the PPR options as appropriate.
+ */
+
+ /*
+ * A sync rate with unknown or zero offset is nonsensical.
+ * A sync period of zero means Async.
+ */
+ if ((spi->valid & CTS_SPI_VALID_SYNC_OFFSET) == 0
+ || spi->sync_offset == 0 || spi->sync_period == 0) {
+ if (bootverbose) {
+ xpt_print(periph->path, "no sync rate available\n");
+ }
+ return (0);
+ }
+
+ if (device->flags & CAM_DEV_DV_HIT_BOTTOM) {
+ CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
+ ("hit async: giving up on DV\n"));
+ return (0);
+ }
+
+ /*
+ * Jump sync_period up by one, but stop at 5MHz and fall back to Async.
+ * We don't try to remember 'last' settings to see if the SIM actually
+ * gets into the speed we want to set. We check on the SIM telling
+ * us that a requested speed is bad, but otherwise don't try and
+ * check the speed due to the asynchronous and handshake nature
+ * of speed setting.
+ */
+ spi->valid = CTS_SPI_VALID_SYNC_RATE | CTS_SPI_VALID_SYNC_OFFSET;
+ for (;;) {
+ spi->sync_period++;
+ if (spi->sync_period >= 0xf) {
+ spi->sync_period = 0;
+ spi->sync_offset = 0;
+ CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
+ ("setting to async for DV\n"));
+ /*
+ * Once we hit async, we don't want to try
+ * any more settings.
+ */
+ device->flags |= CAM_DEV_DV_HIT_BOTTOM;
+ } else if (bootverbose) {
+ CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
+ ("DV: period 0x%x\n", spi->sync_period));
+ printf("setting period to 0x%x\n", spi->sync_period);
+ }
+ cts.ccb_h.func_code = XPT_SET_TRAN_SETTINGS;
+ cts.type = CTS_TYPE_CURRENT_SETTINGS;
+ xpt_action((union ccb *)&cts);
+ if ((cts.ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
+ break;
+ }
+ CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
+ ("DV: failed to set period 0x%x\n", spi->sync_period));
+ if (spi->sync_period == 0) {
+ return (0);
+ }
+ }
+ return (1);
+}
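The backoff loop above walks the SPI sync period value upward (a larger encoded value means a slower clock) until the SIM accepts a setting, giving up on synchronous transfers once the period reaches 0xf. Just that stepping rule as a standalone sketch; `step_period` is an invented helper name, and only the 0xf cutoff and zero-means-async encoding are taken from the code above:

```c
#include <assert.h>

/*
 * Step a SPI sync period one notch slower. At or past 0xf we fall
 * back to asynchronous transfers: period 0 / offset 0 means async.
 * Returns 1 once async has been reached (the "hit bottom" case).
 */
static int
step_period(unsigned *period, unsigned *offset)
{
	(*period)++;
	if (*period >= 0xf) {
		*period = 0;
		*offset = 0;
		return (1);
	}
	return (0);
}
```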
+
+static void
+probedone(struct cam_periph *periph, union ccb *done_ccb)
+{
+ probe_softc *softc;
+ struct cam_path *path;
+ u_int32_t priority;
+
+ CAM_DEBUG(done_ccb->ccb_h.path, CAM_DEBUG_TRACE, ("probedone\n"));
+
+ softc = (probe_softc *)periph->softc;
+ path = done_ccb->ccb_h.path;
+ priority = done_ccb->ccb_h.pinfo.priority;
+
+ switch (softc->action) {
+ case PROBE_TUR:
+ {
+ if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
+
+ if (cam_periph_error(done_ccb, 0,
+ SF_NO_PRINT, NULL) == ERESTART)
+ return;
+ else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0)
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path,
+ /*count*/1,
+ /*run_queue*/TRUE);
+ }
+ PROBE_SET_ACTION(softc, PROBE_INQUIRY);
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ }
+ case PROBE_INQUIRY:
+ case PROBE_FULL_INQUIRY:
+ {
+ if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
+ struct scsi_inquiry_data *inq_buf;
+ u_int8_t periph_qual;
+
+ path->device->flags |= CAM_DEV_INQUIRY_DATA_VALID;
+ inq_buf = &path->device->inq_data;
+
+ periph_qual = SID_QUAL(inq_buf);
+
+ switch(periph_qual) {
+ case SID_QUAL_LU_CONNECTED:
+ {
+ u_int8_t len;
+
+ /*
+ * We conservatively request only
+ * SHORT_INQUIRY_LEN bytes of inquiry
+ * information during our first try
+ * at sending an INQUIRY. If the device
+ * has more information to give,
+ * perform a second request specifying
+ * the amount of information the device
+ * is willing to give.
+ */
+ len = inq_buf->additional_length
+ + offsetof(struct scsi_inquiry_data,
+ additional_length) + 1;
+ if (softc->action == PROBE_INQUIRY
+ && len > SHORT_INQUIRY_LENGTH) {
+ PROBE_SET_ACTION(softc, PROBE_FULL_INQUIRY);
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ }
+
+ scsi_find_quirk(path->device);
+
+ scsi_devise_transport(path);
+ if (INQ_DATA_TQ_ENABLED(inq_buf))
+ PROBE_SET_ACTION(softc, PROBE_MODE_SENSE);
+ else
+ PROBE_SET_ACTION(softc, PROBE_SERIAL_NUM_0);
+
+ path->device->flags &= ~CAM_DEV_UNCONFIGURED;
+
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ }
+ default:
+ break;
+ }
+ } else if (cam_periph_error(done_ccb, 0,
+ done_ccb->ccb_h.target_lun > 0
+ ? SF_RETRY_UA|SF_QUIET_IR
+ : SF_RETRY_UA,
+ &softc->saved_ccb) == ERESTART) {
+ return;
+ } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
+ /*run_queue*/TRUE);
+ }
+ /*
+ * If we get to this point, we got an error status back
+ * from the inquiry and the error status doesn't require
+ * automatically retrying the command. Therefore, the
+ * inquiry failed. If we had inquiry information before
+ * for this device, but this latest inquiry command failed,
+ * the device has probably gone away. If this device isn't
+ * already marked unconfigured, notify the peripheral
+ * drivers that this device is no more.
+ */
+ if ((path->device->flags & CAM_DEV_UNCONFIGURED) == 0)
+ /* Send the async notification. */
+ xpt_async(AC_LOST_DEVICE, path, NULL);
+
+ xpt_release_ccb(done_ccb);
+ break;
+ }
+ case PROBE_MODE_SENSE:
+ {
+ struct ccb_scsiio *csio;
+ struct scsi_mode_header_6 *mode_hdr;
+
+ csio = &done_ccb->csio;
+ mode_hdr = (struct scsi_mode_header_6 *)csio->data_ptr;
+ if ((csio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP) {
+ struct scsi_control_page *page;
+ u_int8_t *offset;
+
+ offset = ((u_int8_t *)&mode_hdr[1])
+ + mode_hdr->blk_desc_len;
+ page = (struct scsi_control_page *)offset;
+ path->device->queue_flags = page->queue_flags;
+ } else if (cam_periph_error(done_ccb, 0,
+ SF_RETRY_UA|SF_NO_PRINT,
+ &softc->saved_ccb) == ERESTART) {
+ return;
+ } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path,
+ /*count*/1, /*run_queue*/TRUE);
+ }
+ xpt_release_ccb(done_ccb);
+ free(mode_hdr, M_CAMXPT);
+ PROBE_SET_ACTION(softc, PROBE_SERIAL_NUM_0);
+ xpt_schedule(periph, priority);
+ return;
+ }
+ case PROBE_SERIAL_NUM_0:
+ {
+ struct ccb_scsiio *csio;
+ struct scsi_vpd_supported_page_list *page_list;
+ int length, serialnum_supported, i;
+
+ serialnum_supported = 0;
+ csio = &done_ccb->csio;
+ page_list =
+ (struct scsi_vpd_supported_page_list *)csio->data_ptr;
+
+ if (page_list == NULL) {
+ /*
+ * Don't process the command as it was never sent
+ */
+ } else if ((csio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP
+ && (page_list->length > 0)) {
+ length = min(page_list->length,
+ SVPD_SUPPORTED_PAGES_SIZE);
+ for (i = 0; i < length; i++) {
+ if (page_list->list[i] ==
+ SVPD_UNIT_SERIAL_NUMBER) {
+ serialnum_supported = 1;
+ break;
+ }
+ }
+ } else if (cam_periph_error(done_ccb, 0,
+ SF_RETRY_UA|SF_NO_PRINT,
+ &softc->saved_ccb) == ERESTART) {
+ return;
+ } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
+ /*run_queue*/TRUE);
+ }
+
+ if (page_list != NULL)
+ free(page_list, M_CAMXPT);
+
+ if (serialnum_supported) {
+ xpt_release_ccb(done_ccb);
+ PROBE_SET_ACTION(softc, PROBE_SERIAL_NUM_1);
+ xpt_schedule(periph, priority);
+ return;
+ }
+
+ csio->data_ptr = NULL;
+ /* FALLTHROUGH */
+ }
+
+ case PROBE_SERIAL_NUM_1:
+ {
+ struct ccb_scsiio *csio;
+ struct scsi_vpd_unit_serial_number *serial_buf;
+ u_int32_t priority;
+ int changed;
+ int have_serialnum;
+
+ changed = 1;
+ have_serialnum = 0;
+ csio = &done_ccb->csio;
+ priority = done_ccb->ccb_h.pinfo.priority;
+ serial_buf =
+ (struct scsi_vpd_unit_serial_number *)csio->data_ptr;
+
+ /* Clean up from previous instance of this device */
+ if (path->device->serial_num != NULL) {
+ free(path->device->serial_num, M_CAMXPT);
+ path->device->serial_num = NULL;
+ path->device->serial_num_len = 0;
+ }
+
+ if (serial_buf == NULL) {
+ /*
+ * Don't process the command as it was never sent
+ */
+ } else if ((csio->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP
+ && (serial_buf->length > 0)) {
+
+ have_serialnum = 1;
+ path->device->serial_num =
+ (u_int8_t *)malloc((serial_buf->length + 1),
+ M_CAMXPT, M_NOWAIT);
+ if (path->device->serial_num != NULL) {
+ bcopy(serial_buf->serial_num,
+ path->device->serial_num,
+ serial_buf->length);
+ path->device->serial_num_len =
+ serial_buf->length;
+ path->device->serial_num[serial_buf->length]
+ = '\0';
+ }
+ } else if (cam_periph_error(done_ccb, 0,
+ SF_RETRY_UA|SF_NO_PRINT,
+ &softc->saved_ccb) == ERESTART) {
+ return;
+ } else if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
+ /*run_queue*/TRUE);
+ }
+
+ /*
+ * Let's see if we have seen this device before.
+ */
+ if ((softc->flags & PROBE_INQUIRY_CKSUM) != 0) {
+ MD5_CTX context;
+ u_int8_t digest[16];
+
+ MD5Init(&context);
+
+ MD5Update(&context,
+ (unsigned char *)&path->device->inq_data,
+ sizeof(struct scsi_inquiry_data));
+
+ if (have_serialnum)
+ MD5Update(&context, serial_buf->serial_num,
+ serial_buf->length);
+
+ MD5Final(digest, &context);
+ if (bcmp(softc->digest, digest, 16) == 0)
+ changed = 0;
+
+ /*
+ * XXX Do we need to do a TUR in order to ensure
+ * that the device really hasn't changed???
+ */
+ if ((changed != 0)
+ && ((softc->flags & PROBE_NO_ANNOUNCE) == 0))
+ xpt_async(AC_LOST_DEVICE, path, NULL);
+ }
+ if (serial_buf != NULL)
+ free(serial_buf, M_CAMXPT);
+
+ if (changed != 0) {
+ /*
+ * Now that we have all the necessary
+ * information to safely perform transfer
+ * negotiations... Controllers don't perform
+ * any negotiation or tagged queuing until
+ * after the first XPT_SET_TRAN_SETTINGS ccb is
+ * received. So, on a new device, just retrieve
+ * the user settings, and set them as the current
+ * settings to set the device up.
+ */
+ proberequestdefaultnegotiation(periph);
+ xpt_release_ccb(done_ccb);
+
+ /*
+ * Perform a TUR to allow the controller to
+ * perform any necessary transfer negotiation.
+ */
+ PROBE_SET_ACTION(softc, PROBE_TUR_FOR_NEGOTIATION);
+ xpt_schedule(periph, priority);
+ return;
+ }
+ xpt_release_ccb(done_ccb);
+ break;
+ }
+ case PROBE_TUR_FOR_NEGOTIATION:
+ if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
+ DELAY(500000);
+ if (cam_periph_error(done_ccb, 0, SF_RETRY_UA,
+ NULL) == ERESTART)
+ return;
+ }
+ /* FALLTHROUGH */
+ case PROBE_DV_EXIT:
+ if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
+ /*run_queue*/TRUE);
+ }
+ /*
+ * Do Domain Validation for lun 0 on devices that claim
+ * to support Synchronous Transfer modes.
+ */
+ if (softc->action == PROBE_TUR_FOR_NEGOTIATION
+ && done_ccb->ccb_h.target_lun == 0
+ && (path->device->inq_data.flags & SID_Sync) != 0
+ && (path->device->flags & CAM_DEV_IN_DV) == 0) {
+ CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
+ ("Begin Domain Validation\n"));
+ path->device->flags |= CAM_DEV_IN_DV;
+ xpt_release_ccb(done_ccb);
+ PROBE_SET_ACTION(softc, PROBE_INQUIRY_BASIC_DV1);
+ xpt_schedule(periph, priority);
+ return;
+ }
+ if (softc->action == PROBE_DV_EXIT) {
+ CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
+ ("Leave Domain Validation\n"));
+ }
+ path->device->flags &=
+ ~(CAM_DEV_UNCONFIGURED|CAM_DEV_IN_DV|CAM_DEV_DV_HIT_BOTTOM);
+ if ((softc->flags & PROBE_NO_ANNOUNCE) == 0) {
+ /* Inform the XPT that a new device has been found */
+ done_ccb->ccb_h.func_code = XPT_GDEV_TYPE;
+ xpt_action(done_ccb);
+ xpt_async(AC_FOUND_DEVICE, done_ccb->ccb_h.path,
+ done_ccb);
+ }
+ xpt_release_ccb(done_ccb);
+ break;
+ case PROBE_INQUIRY_BASIC_DV1:
+ case PROBE_INQUIRY_BASIC_DV2:
+ {
+ struct scsi_inquiry_data *nbuf;
+ struct ccb_scsiio *csio;
+
+ if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) {
+ /* Don't wedge the queue */
+ xpt_release_devq(done_ccb->ccb_h.path, /*count*/1,
+ /*run_queue*/TRUE);
+ }
+ csio = &done_ccb->csio;
+ nbuf = (struct scsi_inquiry_data *)csio->data_ptr;
+ if (bcmp(nbuf, &path->device->inq_data, SHORT_INQUIRY_LENGTH)) {
+ xpt_print(path,
+ "inquiry data fails comparison at DV%d step\n",
+ softc->action == PROBE_INQUIRY_BASIC_DV1 ? 1 : 2);
+ if (proberequestbackoff(periph, path->device)) {
+ path->device->flags &= ~CAM_DEV_IN_DV;
+ PROBE_SET_ACTION(softc, PROBE_TUR_FOR_NEGOTIATION);
+ } else {
+ /* give up */
+ PROBE_SET_ACTION(softc, PROBE_DV_EXIT);
+ }
+ free(nbuf, M_CAMXPT);
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ }
+ free(nbuf, M_CAMXPT);
+ if (softc->action == PROBE_INQUIRY_BASIC_DV1) {
+ PROBE_SET_ACTION(softc, PROBE_INQUIRY_BASIC_DV2);
+ xpt_release_ccb(done_ccb);
+ xpt_schedule(periph, priority);
+ return;
+ }
+ if (softc->action == PROBE_INQUIRY_BASIC_DV2) {
+ CAM_DEBUG(periph->path, CAM_DEBUG_INFO,
+ ("Leave Domain Validation Successfully\n"));
+ }
+ path->device->flags &=
+ ~(CAM_DEV_UNCONFIGURED|CAM_DEV_IN_DV|CAM_DEV_DV_HIT_BOTTOM);
+ if ((softc->flags & PROBE_NO_ANNOUNCE) == 0) {
+ /* Inform the XPT that a new device has been found */
+ done_ccb->ccb_h.func_code = XPT_GDEV_TYPE;
+ xpt_action(done_ccb);
+ xpt_async(AC_FOUND_DEVICE, done_ccb->ccb_h.path,
+ done_ccb);
+ }
+ xpt_release_ccb(done_ccb);
+ break;
+ }
+ case PROBE_INVALID:
+ CAM_DEBUG(done_ccb->ccb_h.path, CAM_DEBUG_INFO,
+ ("probedone: invalid action state\n"));
+ default:
+ break;
+ }
+ done_ccb = (union ccb *)TAILQ_FIRST(&softc->request_ccbs);
+ TAILQ_REMOVE(&softc->request_ccbs, &done_ccb->ccb_h, periph_links.tqe);
+ done_ccb->ccb_h.status = CAM_REQ_CMP;
+ xpt_done(done_ccb);
+ if (TAILQ_FIRST(&softc->request_ccbs) == NULL) {
+ cam_periph_invalidate(periph);
+ cam_periph_release_locked(periph);
+ } else {
+ probeschedule(periph);
+ }
+}
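In the PROBE_SERIAL_NUM_1 case above, probedone() decides whether the device at this path has changed by hashing the inquiry data plus serial number and comparing against the digest saved before the probe. The same idea in miniature, with a tiny FNV-1a hash standing in for the kernel's MD5; the hash choice and the `device_fingerprint` helper are illustrative only:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* FNV-1a, standing in for MD5 purely for illustration. */
static uint64_t
fnv1a(uint64_t h, const void *buf, size_t len)
{
	const unsigned char *p = buf;

	while (len--) {
		h ^= *p++;
		h *= 0x100000001b3ULL;
	}
	return (h);
}

/* Fingerprint = hash of inquiry data, then serial number if present. */
static uint64_t
device_fingerprint(const char *inq, size_t inq_len, const char *serial)
{
	uint64_t h = 0xcbf29ce484222325ULL;

	h = fnv1a(h, inq, inq_len);
	if (serial != NULL)
		h = fnv1a(h, serial, strlen(serial));
	return (h);
}
```

If the fingerprints differ, the probe announces AC_LOST_DEVICE and renegotiates, exactly as the `changed` path above does.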
+
+static void
+probecleanup(struct cam_periph *periph)
+{
+ free(periph->softc, M_CAMXPT);
+}
+
+static void
+scsi_find_quirk(struct cam_ed *device)
+{
+ struct scsi_quirk_entry *quirk;
+ caddr_t match;
+
+ match = cam_quirkmatch((caddr_t)&device->inq_data,
+ (caddr_t)scsi_quirk_table,
+ scsi_quirk_table_size,
+ sizeof(*scsi_quirk_table), scsi_inquiry_match);
+
+ if (match == NULL)
+ panic("scsi_find_quirk: device didn't match wildcard entry!!");
+
+ quirk = (struct scsi_quirk_entry *)match;
+ device->quirk = quirk;
+ device->mintags = quirk->mintags;
+ device->maxtags = quirk->maxtags;
+}
+
+static int
+sysctl_cam_search_luns(SYSCTL_HANDLER_ARGS)
+{
+ int error, bool;
+
+ bool = cam_srch_hi;
+ error = sysctl_handle_int(oidp, &bool, 0, req);
+ if (error != 0 || req->newptr == NULL)
+ return (error);
+ if (bool == 0 || bool == 1) {
+ cam_srch_hi = bool;
+ return (0);
+ } else {
+ return (EINVAL);
+ }
+}
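sysctl_cam_search_luns() follows the usual sysctl handler shape: hand the current value to sysctl_handle_int(), then commit the new value only if it is a valid boolean. The validation step in isolation, as a userspace sketch; `cam_srch_hi` and the sysctl plumbing are stood in by a plain variable and return code:

```c
#include <assert.h>
#include <errno.h>

static int cam_srch_hi_sketch;

/* Accept only 0 or 1, mirroring the handler's EINVAL check. */
static int
set_srch_hi(int val)
{
	if (val != 0 && val != 1)
		return (EINVAL);
	cam_srch_hi_sketch = val;
	return (0);
}
```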
+
+typedef struct {
+ union ccb *request_ccb;
+ struct ccb_pathinq *cpi;
+ int counter;
+} scsi_scan_bus_info;
+
+/*
+ * To start a scan, request_ccb is an XPT_SCAN_BUS ccb.
+ * As the scan progresses, scsi_scan_bus is used as the
+ * completion callback.
+ */
+static void
+scsi_scan_bus(struct cam_periph *periph, union ccb *request_ccb)
+{
+ CAM_DEBUG(request_ccb->ccb_h.path, CAM_DEBUG_TRACE,
+ ("scsi_scan_bus\n"));
+ switch (request_ccb->ccb_h.func_code) {
+ case XPT_SCAN_BUS:
+ {
+ scsi_scan_bus_info *scan_info;
+ union ccb *work_ccb;
+ struct cam_path *path;
+ u_int i;
+ u_int max_target;
+ u_int initiator_id;
+
+ /* Find out the characteristics of the bus */
+ work_ccb = xpt_alloc_ccb_nowait();
+ if (work_ccb == NULL) {
+ request_ccb->ccb_h.status = CAM_RESRC_UNAVAIL;
+ xpt_done(request_ccb);
+ return;
+ }
+ xpt_setup_ccb(&work_ccb->ccb_h, request_ccb->ccb_h.path,
+ request_ccb->ccb_h.pinfo.priority);
+ work_ccb->ccb_h.func_code = XPT_PATH_INQ;
+ xpt_action(work_ccb);
+ if (work_ccb->ccb_h.status != CAM_REQ_CMP) {
+ request_ccb->ccb_h.status = work_ccb->ccb_h.status;
+ xpt_free_ccb(work_ccb);
+ xpt_done(request_ccb);
+ return;
+ }
+
+ if ((work_ccb->cpi.hba_misc & PIM_NOINITIATOR) != 0) {
+ /*
+ * Can't scan the bus on an adapter that
+ * cannot perform the initiator role.
+ */
+ request_ccb->ccb_h.status = CAM_REQ_CMP;
+ xpt_free_ccb(work_ccb);
+ xpt_done(request_ccb);
+ return;
+ }
+
+ /* Save some state for use while we probe for devices */
+ scan_info = (scsi_scan_bus_info *)
+ malloc(sizeof(scsi_scan_bus_info), M_CAMXPT, M_NOWAIT);
+ if (scan_info == NULL) {
+ request_ccb->ccb_h.status = CAM_RESRC_UNAVAIL;
+ xpt_done(request_ccb);
+ return;
+ }
+ scan_info->request_ccb = request_ccb;
+ scan_info->cpi = &work_ccb->cpi;
+
+ /* Cache on our stack so we can work asynchronously */
+ max_target = scan_info->cpi->max_target;
+ initiator_id = scan_info->cpi->initiator_id;
+
+ /*
+ * We can scan all targets in parallel, or do it sequentially.
+ */
+ if (scan_info->cpi->hba_misc & PIM_SEQSCAN) {
+ max_target = 0;
+ scan_info->counter = 0;
+ } else {
+ scan_info->counter = scan_info->cpi->max_target + 1;
+ if (scan_info->cpi->initiator_id < scan_info->counter) {
+ scan_info->counter--;
+ }
+ }
+
+ for (i = 0; i <= max_target; i++) {
+ cam_status status;
+ if (i == initiator_id)
+ continue;
+
+ status = xpt_create_path(&path, xpt_periph,
+ request_ccb->ccb_h.path_id,
+ i, 0);
+ if (status != CAM_REQ_CMP) {
+ printf("scsi_scan_bus: xpt_create_path failed"
+ " with status %#x, bus scan halted\n",
+ status);
+ free(scan_info, M_CAMXPT);
+ request_ccb->ccb_h.status = status;
+ xpt_free_ccb(work_ccb);
+ xpt_done(request_ccb);
+ break;
+ }
+ work_ccb = xpt_alloc_ccb_nowait();
+ if (work_ccb == NULL) {
+ free(scan_info, M_CAMXPT);
+ xpt_free_path(path);
+ request_ccb->ccb_h.status = CAM_RESRC_UNAVAIL;
+ xpt_done(request_ccb);
+ break;
+ }
+ xpt_setup_ccb(&work_ccb->ccb_h, path,
+ request_ccb->ccb_h.pinfo.priority);
+ work_ccb->ccb_h.func_code = XPT_SCAN_LUN;
+ work_ccb->ccb_h.cbfcnp = scsi_scan_bus;
+ work_ccb->ccb_h.ppriv_ptr0 = scan_info;
+ work_ccb->crcn.flags = request_ccb->crcn.flags;
+ xpt_action(work_ccb);
+ }
+ break;
+ }
+ case XPT_SCAN_LUN:
+ {
+ cam_status status;
+ struct cam_path *path;
+ scsi_scan_bus_info *scan_info;
+ path_id_t path_id;
+ target_id_t target_id;
+ lun_id_t lun_id;
+
+ /* Reuse the same CCB to query if a device was really found */
+ scan_info = (scsi_scan_bus_info *)request_ccb->ccb_h.ppriv_ptr0;
+ xpt_setup_ccb(&request_ccb->ccb_h, request_ccb->ccb_h.path,
+ request_ccb->ccb_h.pinfo.priority);
+ request_ccb->ccb_h.func_code = XPT_GDEV_TYPE;
+
+ path_id = request_ccb->ccb_h.path_id;
+ target_id = request_ccb->ccb_h.target_id;
+ lun_id = request_ccb->ccb_h.target_lun;
+ xpt_action(request_ccb);
+
+ if (request_ccb->ccb_h.status != CAM_REQ_CMP) {
+ struct cam_ed *device;
+ struct cam_et *target;
+ int phl;
+
+ /*
+ * If we already probed lun 0 successfully, or
+ * we have additional configured luns on this
+ * target that might have "gone away", go onto
+ * the next lun.
+ */
+ target = request_ccb->ccb_h.path->target;
+ /*
+ * We may touch devices that we don't
+ * hold references to, so ensure they
+ * don't disappear out from under us.
+ * The target above is referenced by the
+ * path in the request ccb.
+ */
+ phl = 0;
+ device = TAILQ_FIRST(&target->ed_entries);
+ if (device != NULL) {
+ phl = CAN_SRCH_HI_SPARSE(device);
+ if (device->lun_id == 0)
+ device = TAILQ_NEXT(device, links);
+ }
+ if ((lun_id != 0) || (device != NULL)) {
+ if (lun_id < (CAM_SCSI2_MAXLUN-1) || phl)
+ lun_id++;
+ }
+ } else {
+ struct cam_ed *device;
+
+ device = request_ccb->ccb_h.path->device;
+
+ if ((SCSI_QUIRK(device)->quirks &
+ CAM_QUIRK_NOLUNS) == 0) {
+ /* Try the next lun */
+ if (lun_id < (CAM_SCSI2_MAXLUN-1)
+ || CAN_SRCH_HI_DENSE(device))
+ lun_id++;
+ }
+ }
+
+ /*
+ * Free the current request path- we're done with it.
+ */
+ xpt_free_path(request_ccb->ccb_h.path);
+
+ /*
+ * Check to see if we should scan any further luns.
+ */
+ if (lun_id == request_ccb->ccb_h.target_lun
+ || lun_id > scan_info->cpi->max_lun) {
+ int done;
+
+ hop_again:
+ done = 0;
+ if (scan_info->cpi->hba_misc & PIM_SEQSCAN) {
+ scan_info->counter++;
+ if (scan_info->counter ==
+ scan_info->cpi->initiator_id) {
+ scan_info->counter++;
+ }
+ if (scan_info->counter >=
+ scan_info->cpi->max_target+1) {
+ done = 1;
+ }
+ } else {
+ scan_info->counter--;
+ if (scan_info->counter == 0) {
+ done = 1;
+ }
+ }
+ if (done) {
+ xpt_free_ccb(request_ccb);
+ xpt_free_ccb((union ccb *)scan_info->cpi);
+ request_ccb = scan_info->request_ccb;
+ free(scan_info, M_CAMXPT);
+ request_ccb->ccb_h.status = CAM_REQ_CMP;
+ xpt_done(request_ccb);
+ break;
+ }
+
+ if ((scan_info->cpi->hba_misc & PIM_SEQSCAN) == 0) {
+ break;
+ }
+ status = xpt_create_path(&path, xpt_periph,
+ scan_info->request_ccb->ccb_h.path_id,
+ scan_info->counter, 0);
+ if (status != CAM_REQ_CMP) {
+ printf("scsi_scan_bus: xpt_create_path failed"
+ " with status %#x, bus scan halted\n",
+ status);
+ xpt_free_ccb(request_ccb);
+ xpt_free_ccb((union ccb *)scan_info->cpi);
+ request_ccb = scan_info->request_ccb;
+ free(scan_info, M_CAMXPT);
+ request_ccb->ccb_h.status = status;
+ xpt_done(request_ccb);
+ break;
+ }
+ xpt_setup_ccb(&request_ccb->ccb_h, path,
+ request_ccb->ccb_h.pinfo.priority);
+ request_ccb->ccb_h.func_code = XPT_SCAN_LUN;
+ request_ccb->ccb_h.cbfcnp = scsi_scan_bus;
+ request_ccb->ccb_h.ppriv_ptr0 = scan_info;
+ request_ccb->crcn.flags =
+ scan_info->request_ccb->crcn.flags;
+ } else {
+ status = xpt_create_path(&path, xpt_periph,
+ path_id, target_id, lun_id);
+ if (status != CAM_REQ_CMP) {
+ printf("scsi_scan_bus: xpt_create_path failed "
+ "with status %#x, halting LUN scan\n",
+ status);
+ goto hop_again;
+ }
+ xpt_setup_ccb(&request_ccb->ccb_h, path,
+ request_ccb->ccb_h.pinfo.priority);
+ request_ccb->ccb_h.func_code = XPT_SCAN_LUN;
+ request_ccb->ccb_h.cbfcnp = scsi_scan_bus;
+ request_ccb->ccb_h.ppriv_ptr0 = scan_info;
+ request_ccb->crcn.flags =
+ scan_info->request_ccb->crcn.flags;
+ }
+ xpt_action(request_ccb);
+ break;
+ }
+ default:
+ break;
+ }
+}
+
+static void
+scsi_scan_lun(struct cam_periph *periph, struct cam_path *path,
+ cam_flags flags, union ccb *request_ccb)
+{
+ struct ccb_pathinq cpi;
+ cam_status status;
+ struct cam_path *new_path;
+ struct cam_periph *old_periph;
+
+ CAM_DEBUG(request_ccb->ccb_h.path, CAM_DEBUG_TRACE,
+ ("scsi_scan_lun\n"));
+
+ xpt_setup_ccb(&cpi.ccb_h, path, /*priority*/1);
+ cpi.ccb_h.func_code = XPT_PATH_INQ;
+ xpt_action((union ccb *)&cpi);
+
+ if (cpi.ccb_h.status != CAM_REQ_CMP) {
+ if (request_ccb != NULL) {
+ request_ccb->ccb_h.status = cpi.ccb_h.status;
+ xpt_done(request_ccb);
+ }
+ return;
+ }
+
+ if ((cpi.hba_misc & PIM_NOINITIATOR) != 0) {
+ /*
+ * Can't scan the bus on an adapter that
+ * cannot perform the initiator role.
+ */
+ if (request_ccb != NULL) {
+ request_ccb->ccb_h.status = CAM_REQ_CMP;
+ xpt_done(request_ccb);
+ }
+ return;
+ }
+
+ if (request_ccb == NULL) {
+ request_ccb = malloc(sizeof(union ccb), M_CAMXPT, M_NOWAIT);
+ if (request_ccb == NULL) {
+ xpt_print(path, "scsi_scan_lun: can't allocate CCB, "
+ "can't continue\n");
+ return;
+ }
+ new_path = malloc(sizeof(*new_path), M_CAMXPT, M_NOWAIT);
+ if (new_path == NULL) {
+ xpt_print(path, "scsi_scan_lun: can't allocate path, "
+ "can't continue\n");
+ free(request_ccb, M_CAMXPT);
+ return;
+ }
+ status = xpt_compile_path(new_path, xpt_periph,
+ path->bus->path_id,
+ path->target->target_id,
+ path->device->lun_id);
+
+ if (status != CAM_REQ_CMP) {
+ xpt_print(path, "scsi_scan_lun: can't compile path, "
+ "can't continue\n");
+ free(request_ccb, M_CAMXPT);
+ free(new_path, M_CAMXPT);
+ return;
+ }
+ xpt_setup_ccb(&request_ccb->ccb_h, new_path, /*priority*/ 1);
+ request_ccb->ccb_h.cbfcnp = xptscandone;
+ request_ccb->ccb_h.func_code = XPT_SCAN_LUN;
+ request_ccb->crcn.flags = flags;
+ }
+
+ if ((old_periph = cam_periph_find(path, "probe")) != NULL) {
+ probe_softc *softc;
+
+ softc = (probe_softc *)old_periph->softc;
+ TAILQ_INSERT_TAIL(&softc->request_ccbs, &request_ccb->ccb_h,
+ periph_links.tqe);
+ } else {
+ status = cam_periph_alloc(proberegister, NULL, probecleanup,
+ probestart, "probe",
+ CAM_PERIPH_BIO,
+ request_ccb->ccb_h.path, NULL, 0,
+ request_ccb);
+
+ if (status != CAM_REQ_CMP) {
+ xpt_print(path, "scsi_scan_lun: cam_periph_alloc "
+ "returned an error, can't continue probe\n");
+ request_ccb->ccb_h.status = status;
+ xpt_done(request_ccb);
+ }
+ }
+}
+
+static void
+xptscandone(struct cam_periph *periph, union ccb *done_ccb)
+{
+ xpt_release_path(done_ccb->ccb_h.path);
+ free(done_ccb->ccb_h.path, M_CAMXPT);
+ free(done_ccb, M_CAMXPT);
+}
+
+static struct cam_ed *
+scsi_alloc_device(struct cam_eb *bus, struct cam_et *target, lun_id_t lun_id)
+{
+ struct cam_path path;
+ struct scsi_quirk_entry *quirk;
+ struct cam_ed *device;
+ struct cam_ed *cur_device;
+
+ device = xpt_alloc_device(bus, target, lun_id);
+ if (device == NULL)
+ return (NULL);
+
+ /*
+ * Take the default quirk entry until we have inquiry
+ * data and can determine a better quirk to use.
+ */
+ quirk = &scsi_quirk_table[scsi_quirk_table_size - 1];
+ device->quirk = (void *)quirk;
+ device->mintags = quirk->mintags;
+ device->maxtags = quirk->maxtags;
+ bzero(&device->inq_data, sizeof(device->inq_data));
+ device->inq_flags = 0;
+ device->queue_flags = 0;
+ device->serial_num = NULL;
+ device->serial_num_len = 0;
+
+ /*
+ * XXX should be limited by number of CCBs this bus can
+ * do.
+ */
+ bus->sim->max_ccbs += device->ccbq.devq_openings;
+ /* Insertion sort into our target's device list */
+ cur_device = TAILQ_FIRST(&target->ed_entries);
+ while (cur_device != NULL && cur_device->lun_id < lun_id)
+ cur_device = TAILQ_NEXT(cur_device, links);
+ if (cur_device != NULL) {
+ TAILQ_INSERT_BEFORE(cur_device, device, links);
+ } else {
+ TAILQ_INSERT_TAIL(&target->ed_entries, device, links);
+ }
+ target->generation++;
+ if (lun_id != CAM_LUN_WILDCARD) {
+ xpt_compile_path(&path,
+ NULL,
+ bus->path_id,
+ target->target_id,
+ lun_id);
+ scsi_devise_transport(&path);
+ xpt_release_path(&path);
+ }
+
+ return (device);
+}
+
+static void
+scsi_devise_transport(struct cam_path *path)
+{
+ struct ccb_pathinq cpi;
+ struct ccb_trans_settings cts;
+ struct scsi_inquiry_data *inq_buf;
+
+ /* Get transport information from the SIM */
+ xpt_setup_ccb(&cpi.ccb_h, path, /*priority*/1);
+ cpi.ccb_h.func_code = XPT_PATH_INQ;
+ xpt_action((union ccb *)&cpi);
+
+ inq_buf = NULL;
+ if ((path->device->flags & CAM_DEV_INQUIRY_DATA_VALID) != 0)
+ inq_buf = &path->device->inq_data;
+ path->device->protocol = PROTO_SCSI;
+ path->device->protocol_version =
+ inq_buf != NULL ? SID_ANSI_REV(inq_buf) : cpi.protocol_version;
+ path->device->transport = cpi.transport;
+ path->device->transport_version = cpi.transport_version;
+
+ /*
+ * Any device not using SPI3 features should
+ * be considered SPI2 or lower.
+ */
+ if (inq_buf != NULL) {
+ if (path->device->transport == XPORT_SPI
+ && (inq_buf->spi3data & SID_SPI_MASK) == 0
+ && path->device->transport_version > 2)
+ path->device->transport_version = 2;
+ } else {
+ struct cam_ed *otherdev;
+
+ for (otherdev = TAILQ_FIRST(&path->target->ed_entries);
+ otherdev != NULL;
+ otherdev = TAILQ_NEXT(otherdev, links)) {
+ if (otherdev != path->device)
+ break;
+ }
+
+ if (otherdev != NULL) {
+ /*
+ * Initially assume the same versioning as
+ * prior luns for this target.
+ */
+ path->device->protocol_version =
+ otherdev->protocol_version;
+ path->device->transport_version =
+ otherdev->transport_version;
+ } else {
+ /* Until we know better, opt for safety */
+ path->device->protocol_version = 2;
+ if (path->device->transport == XPORT_SPI)
+ path->device->transport_version = 2;
+ else
+ path->device->transport_version = 0;
+ }
+ }
+
+ /*
+ * XXX
+ * For a device compliant with SPC-2 we should be able
+ * to determine the transport version supported by
+ * scrutinizing the version descriptors in the
+ * inquiry buffer.
+ */
+
+ /* Tell the controller what we think */
+ xpt_setup_ccb(&cts.ccb_h, path, /*priority*/1);
+ cts.ccb_h.func_code = XPT_SET_TRAN_SETTINGS;
+ cts.type = CTS_TYPE_CURRENT_SETTINGS;
+ cts.transport = path->device->transport;
+ cts.transport_version = path->device->transport_version;
+ cts.protocol = path->device->protocol;
+ cts.protocol_version = path->device->protocol_version;
+ cts.proto_specific.valid = 0;
+ cts.xport_specific.valid = 0;
+ xpt_action((union ccb *)&cts);
+}
+
+static void
+scsi_action(union ccb *start_ccb)
+{
+
+ switch (start_ccb->ccb_h.func_code) {
+ case XPT_SET_TRAN_SETTINGS:
+ {
+ scsi_set_transfer_settings(&start_ccb->cts,
+ start_ccb->ccb_h.path->device,
+ /*async_update*/FALSE);
+ break;
+ }
+ case XPT_SCAN_BUS:
+ scsi_scan_bus(start_ccb->ccb_h.path->periph, start_ccb);
+ break;
+ case XPT_SCAN_LUN:
+ scsi_scan_lun(start_ccb->ccb_h.path->periph,
+ start_ccb->ccb_h.path, start_ccb->crcn.flags,
+ start_ccb);
+ break;
+ case XPT_GET_TRAN_SETTINGS:
+ {
+ struct cam_sim *sim;
+
+ sim = start_ccb->ccb_h.path->bus->sim;
+ (*(sim->sim_action))(sim, start_ccb);
+ break;
+ }
+ default:
+ xpt_action_default(start_ccb);
+ break;
+ }
+}
+
+static void
+scsi_set_transfer_settings(struct ccb_trans_settings *cts, struct cam_ed *device,
+ int async_update)
+{
+ struct ccb_pathinq cpi;
+ struct ccb_trans_settings cur_cts;
+ struct ccb_trans_settings_scsi *scsi;
+ struct ccb_trans_settings_scsi *cur_scsi;
+ struct cam_sim *sim;
+ struct scsi_inquiry_data *inq_data;
+
+ if (device == NULL) {
+ cts->ccb_h.status = CAM_PATH_INVALID;
+ xpt_done((union ccb *)cts);
+ return;
+ }
+
+ if (cts->protocol == PROTO_UNKNOWN
+ || cts->protocol == PROTO_UNSPECIFIED) {
+ cts->protocol = device->protocol;
+ cts->protocol_version = device->protocol_version;
+ }
+
+ if (cts->protocol_version == PROTO_VERSION_UNKNOWN
+ || cts->protocol_version == PROTO_VERSION_UNSPECIFIED)
+ cts->protocol_version = device->protocol_version;
+
+ if (cts->protocol != device->protocol) {
+ xpt_print(cts->ccb_h.path, "Uninitialized Protocol %x:%x?\n",
+ cts->protocol, device->protocol);
+ cts->protocol = device->protocol;
+ }
+
+ if (cts->protocol_version > device->protocol_version) {
+ if (bootverbose) {
+ xpt_print(cts->ccb_h.path, "Down revving Protocol "
+ "Version from %d to %d?\n", cts->protocol_version,
+ device->protocol_version);
+ }
+ cts->protocol_version = device->protocol_version;
+ }
+
+ if (cts->transport == XPORT_UNKNOWN
+ || cts->transport == XPORT_UNSPECIFIED) {
+ cts->transport = device->transport;
+ cts->transport_version = device->transport_version;
+ }
+
+ if (cts->transport_version == XPORT_VERSION_UNKNOWN
+ || cts->transport_version == XPORT_VERSION_UNSPECIFIED)
+ cts->transport_version = device->transport_version;
+
+ if (cts->transport != device->transport) {
+ xpt_print(cts->ccb_h.path, "Uninitialized Transport %x:%x?\n",
+ cts->transport, device->transport);
+ cts->transport = device->transport;
+ }
+
+ if (cts->transport_version > device->transport_version) {
+ if (bootverbose) {
+ xpt_print(cts->ccb_h.path, "Down revving Transport "
+ "Version from %d to %d?\n", cts->transport_version,
+ device->transport_version);
+ }
+ cts->transport_version = device->transport_version;
+ }
+
+ sim = cts->ccb_h.path->bus->sim;
+
+ /*
+ * Nothing more of interest to do unless
+ * this is a device connected via the
+ * SCSI protocol.
+ */
+ if (cts->protocol != PROTO_SCSI) {
+ if (async_update == FALSE)
+ (*(sim->sim_action))(sim, (union ccb *)cts);
+ return;
+ }
+
+ inq_data = &device->inq_data;
+ scsi = &cts->proto_specific.scsi;
+ xpt_setup_ccb(&cpi.ccb_h, cts->ccb_h.path, /*priority*/1);
+ cpi.ccb_h.func_code = XPT_PATH_INQ;
+ xpt_action((union ccb *)&cpi);
+
+ /* SCSI specific sanity checking */
+ if ((cpi.hba_inquiry & PI_TAG_ABLE) == 0
+ || (INQ_DATA_TQ_ENABLED(inq_data)) == 0
+ || (device->queue_flags & SCP_QUEUE_DQUE) != 0
+ || (device->mintags == 0)) {
+ /*
+ * Can't tag on hardware that doesn't support tags,
+ * doesn't have it enabled, or has broken tag support.
+ */
+ scsi->flags &= ~CTS_SCSI_FLAGS_TAG_ENB;
+ }
+
+ if (async_update == FALSE) {
+ /*
+ * Perform sanity checking against what the
+ * controller and device can do.
+ */
+ xpt_setup_ccb(&cur_cts.ccb_h, cts->ccb_h.path, /*priority*/1);
+ cur_cts.ccb_h.func_code = XPT_GET_TRAN_SETTINGS;
+ cur_cts.type = cts->type;
+ xpt_action((union ccb *)&cur_cts);
+ if ((cur_cts.ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
+ return;
+ }
+ cur_scsi = &cur_cts.proto_specific.scsi;
+ if ((scsi->valid & CTS_SCSI_VALID_TQ) == 0) {
+ scsi->flags &= ~CTS_SCSI_FLAGS_TAG_ENB;
+ scsi->flags |= cur_scsi->flags & CTS_SCSI_FLAGS_TAG_ENB;
+ }
+ if ((cur_scsi->valid & CTS_SCSI_VALID_TQ) == 0)
+ scsi->flags &= ~CTS_SCSI_FLAGS_TAG_ENB;
+ }
+
+ /* SPI specific sanity checking */
+ if (cts->transport == XPORT_SPI && async_update == FALSE) {
+ u_int spi3caps;
+ struct ccb_trans_settings_spi *spi;
+ struct ccb_trans_settings_spi *cur_spi;
+
+ spi = &cts->xport_specific.spi;
+
+ cur_spi = &cur_cts.xport_specific.spi;
+
+ /* Fill in any gaps in what the user gave us */
+ if ((spi->valid & CTS_SPI_VALID_SYNC_RATE) == 0)
+ spi->sync_period = cur_spi->sync_period;
+ if ((cur_spi->valid & CTS_SPI_VALID_SYNC_RATE) == 0)
+ spi->sync_period = 0;
+ if ((spi->valid & CTS_SPI_VALID_SYNC_OFFSET) == 0)
+ spi->sync_offset = cur_spi->sync_offset;
+ if ((cur_spi->valid & CTS_SPI_VALID_SYNC_OFFSET) == 0)
+ spi->sync_offset = 0;
+ if ((spi->valid & CTS_SPI_VALID_PPR_OPTIONS) == 0)
+ spi->ppr_options = cur_spi->ppr_options;
+ if ((cur_spi->valid & CTS_SPI_VALID_PPR_OPTIONS) == 0)
+ spi->ppr_options = 0;
+ if ((spi->valid & CTS_SPI_VALID_BUS_WIDTH) == 0)
+ spi->bus_width = cur_spi->bus_width;
+ if ((cur_spi->valid & CTS_SPI_VALID_BUS_WIDTH) == 0)
+ spi->bus_width = 0;
+ if ((spi->valid & CTS_SPI_VALID_DISC) == 0) {
+ spi->flags &= ~CTS_SPI_FLAGS_DISC_ENB;
+ spi->flags |= cur_spi->flags & CTS_SPI_FLAGS_DISC_ENB;
+ }
+ if ((cur_spi->valid & CTS_SPI_VALID_DISC) == 0)
+ spi->flags &= ~CTS_SPI_FLAGS_DISC_ENB;
+ if (((device->flags & CAM_DEV_INQUIRY_DATA_VALID) != 0
+ && (inq_data->flags & SID_Sync) == 0
+ && cts->type == CTS_TYPE_CURRENT_SETTINGS)
+ || ((cpi.hba_inquiry & PI_SDTR_ABLE) == 0)) {
+ /* Force async */
+ spi->sync_period = 0;
+ spi->sync_offset = 0;
+ }
+
+ switch (spi->bus_width) {
+ case MSG_EXT_WDTR_BUS_32_BIT:
+ if (((device->flags & CAM_DEV_INQUIRY_DATA_VALID) == 0
+ || (inq_data->flags & SID_WBus32) != 0
+ || cts->type == CTS_TYPE_USER_SETTINGS)
+ && (cpi.hba_inquiry & PI_WIDE_32) != 0)
+ break;
+ /* Fall Through to 16-bit */
+ case MSG_EXT_WDTR_BUS_16_BIT:
+ if (((device->flags & CAM_DEV_INQUIRY_DATA_VALID) == 0
+ || (inq_data->flags & SID_WBus16) != 0
+ || cts->type == CTS_TYPE_USER_SETTINGS)
+ && (cpi.hba_inquiry & PI_WIDE_16) != 0) {
+ spi->bus_width = MSG_EXT_WDTR_BUS_16_BIT;
+ break;
+ }
+ /* Fall Through to 8-bit */
+ default: /* New bus width?? */
+ case MSG_EXT_WDTR_BUS_8_BIT:
+ /* All targets can do this */
+ spi->bus_width = MSG_EXT_WDTR_BUS_8_BIT;
+ break;
+ }
+
+ spi3caps = cpi.xport_specific.spi.ppr_options;
+ if ((device->flags & CAM_DEV_INQUIRY_DATA_VALID) != 0
+ && cts->type == CTS_TYPE_CURRENT_SETTINGS)
+ spi3caps &= inq_data->spi3data;
+
+ if ((spi3caps & SID_SPI_CLOCK_DT) == 0)
+ spi->ppr_options &= ~MSG_EXT_PPR_DT_REQ;
+
+ if ((spi3caps & SID_SPI_IUS) == 0)
+ spi->ppr_options &= ~MSG_EXT_PPR_IU_REQ;
+
+ if ((spi3caps & SID_SPI_QAS) == 0)
+ spi->ppr_options &= ~MSG_EXT_PPR_QAS_REQ;
+
+ /* No SPI Transfer settings are allowed unless we are wide */
+ if (spi->bus_width == 0)
+ spi->ppr_options = 0;
+
+ if ((spi->valid & CTS_SPI_VALID_DISC)
+ && ((spi->flags & CTS_SPI_FLAGS_DISC_ENB) == 0)) {
+ /*
+ * Can't tag queue without disconnection.
+ */
+ scsi->flags &= ~CTS_SCSI_FLAGS_TAG_ENB;
+ scsi->valid |= CTS_SCSI_VALID_TQ;
+ }
+
+ /*
+ * If we are currently performing tagged transactions to
+ * this device and want to change its negotiation parameters,
+ * go non-tagged for a bit to give the controller a chance to
+ * negotiate unhampered by tag messages.
+ */
+ if (cts->type == CTS_TYPE_CURRENT_SETTINGS
+ && (device->inq_flags & SID_CmdQue) != 0
+ && (scsi->flags & CTS_SCSI_FLAGS_TAG_ENB) != 0
+ && (spi->valid & (CTS_SPI_VALID_SYNC_RATE|
+ CTS_SPI_VALID_SYNC_OFFSET|
+ CTS_SPI_VALID_BUS_WIDTH)) != 0)
+ scsi_toggle_tags(cts->ccb_h.path);
+ }
+
+ if (cts->type == CTS_TYPE_CURRENT_SETTINGS
+ && (scsi->valid & CTS_SCSI_VALID_TQ) != 0) {
+ int device_tagenb;
+
+ /*
+ * If we are transitioning from tags to no-tags or
+ * vice-versa, we need to carefully freeze and restart
+ * the queue so that we don't overlap tagged and non-tagged
+ * commands. We also temporarily stop tags if there is
+ * a change in transfer negotiation settings to allow
+ * "tag-less" negotiation.
+ */
+ if ((device->flags & CAM_DEV_TAG_AFTER_COUNT) != 0
+ || (device->inq_flags & SID_CmdQue) != 0)
+ device_tagenb = TRUE;
+ else
+ device_tagenb = FALSE;
+
+ if (((scsi->flags & CTS_SCSI_FLAGS_TAG_ENB) != 0
+ && device_tagenb == FALSE)
+ || ((scsi->flags & CTS_SCSI_FLAGS_TAG_ENB) == 0
+ && device_tagenb == TRUE)) {
+
+ if ((scsi->flags & CTS_SCSI_FLAGS_TAG_ENB) != 0) {
+ /*
+ * Delay change to use tags until after a
+ * few commands have gone to this device so
+ * the controller has time to perform transfer
+ * negotiations without tagged messages getting
+ * in the way.
+ */
+ device->tag_delay_count = CAM_TAG_DELAY_COUNT;
+ device->flags |= CAM_DEV_TAG_AFTER_COUNT;
+ } else {
+ struct ccb_relsim crs;
+
+ xpt_freeze_devq(cts->ccb_h.path, /*count*/1);
+ device->inq_flags &= ~SID_CmdQue;
+ xpt_dev_ccbq_resize(cts->ccb_h.path,
+ sim->max_dev_openings);
+ device->flags &= ~CAM_DEV_TAG_AFTER_COUNT;
+ device->tag_delay_count = 0;
+
+ xpt_setup_ccb(&crs.ccb_h, cts->ccb_h.path,
+ /*priority*/1);
+ crs.ccb_h.func_code = XPT_REL_SIMQ;
+ crs.release_flags = RELSIM_RELEASE_AFTER_QEMPTY;
+ crs.openings
+ = crs.release_timeout
+ = crs.qfrozen_cnt
+ = 0;
+ xpt_action((union ccb *)&crs);
+ }
+ }
+ }
+ if (async_update == FALSE)
+ (*(sim->sim_action))(sim, (union ccb *)cts);
+}
+
+static void
+scsi_toggle_tags(struct cam_path *path)
+{
+ struct cam_ed *dev;
+
+ /*
+ * Give controllers a chance to renegotiate
+ * before starting tag operations. We
+ * "toggle" tagged queuing off then on
+ * which causes the tag enable command delay
+ * counter to come into effect.
+ */
+ dev = path->device;
+ if ((dev->flags & CAM_DEV_TAG_AFTER_COUNT) != 0
+ || ((dev->inq_flags & SID_CmdQue) != 0
+ && (dev->inq_flags & (SID_Sync|SID_WBus16|SID_WBus32)) != 0)) {
+ struct ccb_trans_settings cts;
+
+ xpt_setup_ccb(&cts.ccb_h, path, 1);
+ cts.protocol = PROTO_SCSI;
+ cts.protocol_version = PROTO_VERSION_UNSPECIFIED;
+ cts.transport = XPORT_UNSPECIFIED;
+ cts.transport_version = XPORT_VERSION_UNSPECIFIED;
+ cts.proto_specific.scsi.flags = 0;
+ cts.proto_specific.scsi.valid = CTS_SCSI_VALID_TQ;
+ scsi_set_transfer_settings(&cts, path->device,
+ /*async_update*/TRUE);
+ cts.proto_specific.scsi.flags = CTS_SCSI_FLAGS_TAG_ENB;
+ scsi_set_transfer_settings(&cts, path->device,
+ /*async_update*/TRUE);
+ }
+}
+
+/*
+ * Handle any per-device event notifications that require action by the XPT.
+ */
+static void
+scsi_dev_async(u_int32_t async_code, struct cam_eb *bus, struct cam_et *target,
+ struct cam_ed *device, void *async_arg)
+{
+ cam_status status;
+ struct cam_path newpath;
+
+ /*
+ * We only need to handle events for real devices.
+ */
+ if (target->target_id == CAM_TARGET_WILDCARD
+ || device->lun_id == CAM_LUN_WILDCARD)
+ return;
+
+ /*
+ * We need our own path with wildcards expanded to
+ * handle certain types of events.
+ */
+ if ((async_code == AC_SENT_BDR)
+ || (async_code == AC_BUS_RESET)
+ || (async_code == AC_INQ_CHANGED))
+ status = xpt_compile_path(&newpath, NULL,
+ bus->path_id,
+ target->target_id,
+ device->lun_id);
+ else
+ status = CAM_REQ_CMP_ERR;
+
+ if (status == CAM_REQ_CMP) {
+
+ /*
+ * Allow transfer negotiation to occur in a
+ * tag free environment.
+ */
+ if (async_code == AC_SENT_BDR
+ || async_code == AC_BUS_RESET)
+ scsi_toggle_tags(&newpath);
+
+ if (async_code == AC_INQ_CHANGED) {
+ /*
+ * We've sent a start unit command, or
+ * something similar, to a device that
+ * may have caused its inquiry data to
+ * change. So we re-scan the device to
+ * refresh the inquiry data for it.
+ */
+ scsi_scan_lun(newpath.periph, &newpath,
+ CAM_EXPECT_INQ_CHANGE, NULL);
+ }
+ xpt_release_path(&newpath);
+ } else if (async_code == AC_LOST_DEVICE) {
+ device->flags |= CAM_DEV_UNCONFIGURED;
+ } else if (async_code == AC_TRANSFER_NEG) {
+ struct ccb_trans_settings *settings;
+
+ settings = (struct ccb_trans_settings *)async_arg;
+ scsi_set_transfer_settings(settings, device,
+ /*async_update*/TRUE);
+ }
+}
+
diff --git a/sys/conf/files b/sys/conf/files
index f33f4e127af8..4d3094bea3e3 100644
--- a/sys/conf/files
+++ b/sys/conf/files
@@ -110,9 +110,13 @@ cam/cam_periph.c optional scbus
cam/cam_queue.c optional scbus
cam/cam_sim.c optional scbus
cam/cam_xpt.c optional scbus
+cam/ata/ata_all.c optional scbus
+cam/ata/ata_xpt.c optional scbus
+cam/scsi/scsi_xpt.c optional scbus
cam/scsi/scsi_all.c optional scbus
cam/scsi/scsi_cd.c optional cd
cam/scsi/scsi_ch.c optional ch
+cam/ata/ata_da.c optional da
cam/scsi/scsi_da.c optional da
cam/scsi/scsi_low.c optional ct | ncv | nsp | stg
cam/scsi/scsi_low_pisa.c optional ct | ncv | nsp | stg
@@ -459,6 +463,7 @@ dev/aha/aha.c optional aha
dev/aha/aha_isa.c optional aha isa
dev/aha/aha_mca.c optional aha mca
dev/ahb/ahb.c optional ahb eisa
+dev/ahci/ahci.c optional ahci pci
dev/aic/aic.c optional aic
dev/aic/aic_pccard.c optional aic pccard
dev/aic7xxx/ahc_eisa.c optional ahc eisa
diff --git a/sys/dev/advansys/advansys.c b/sys/dev/advansys/advansys.c
index 7027bcf0842b..cc18f189d4f5 100644
--- a/sys/dev/advansys/advansys.c
+++ b/sys/dev/advansys/advansys.c
@@ -1345,7 +1345,7 @@ adv_attach(adv)
/* highaddr */ BUS_SPACE_MAXADDR,
/* filter */ NULL,
/* filterarg */ NULL,
- /* maxsize */ MAXPHYS,
+ /* maxsize */ ADV_MAXPHYS,
/* nsegments */ max_sg,
/* maxsegsz */ BUS_SPACE_MAXSIZE_32BIT,
/* flags */ BUS_DMA_ALLOCNOW,
diff --git a/sys/dev/advansys/advlib.h b/sys/dev/advansys/advlib.h
index ac7da33d5c7e..f5d1437a62c0 100644
--- a/sys/dev/advansys/advlib.h
+++ b/sys/dev/advansys/advlib.h
@@ -58,6 +58,8 @@ typedef u_int8_t target_bit_vector;
#define ADV_MAX_TID 7
#define ADV_MAX_LUN 7
+#define ADV_MAXPHYS (128 * 1024)
+
/* Enumeration of board types */
typedef enum {
ADV_NONE = 0x000,
diff --git a/sys/dev/ahci/ahci.c b/sys/dev/ahci/ahci.c
new file mode 100644
index 000000000000..389648adf8ee
--- /dev/null
+++ b/sys/dev/ahci/ahci.c
@@ -0,0 +1,1858 @@
+/*-
+ * Copyright (c) 2009 Alexander Motin <mav@FreeBSD.org>
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer,
+ * without modification, immediately at the beginning of the file.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+#include <sys/param.h>
+#include <sys/module.h>
+#include <sys/systm.h>
+#include <sys/kernel.h>
+#include <sys/ata.h>
+#include <sys/bus.h>
+#include <sys/endian.h>
+#include <sys/malloc.h>
+#include <sys/lock.h>
+#include <sys/mutex.h>
+#include <sys/sema.h>
+#include <sys/taskqueue.h>
+#include <vm/uma.h>
+#include <machine/stdarg.h>
+#include <machine/resource.h>
+#include <machine/bus.h>
+#include <sys/rman.h>
+#include <dev/pci/pcivar.h>
+#include <dev/pci/pcireg.h>
+#include "ahci.h"
+
+#include <cam/cam.h>
+#include <cam/cam_ccb.h>
+#include <cam/cam_sim.h>
+#include <cam/cam_xpt_sim.h>
+#include <cam/cam_xpt_periph.h>
+#include <cam/cam_debug.h>
+
+/* local prototypes */
+static int ahci_setup_interrupt(device_t dev);
+static void ahci_intr(void *data);
+static void ahci_intr_one(void *data);
+static int ahci_suspend(device_t dev);
+static int ahci_resume(device_t dev);
+static int ahci_ch_suspend(device_t dev);
+static int ahci_ch_resume(device_t dev);
+static void ahci_ch_intr_locked(void *data);
+static void ahci_ch_intr(void *data);
+static int ahci_ctlr_reset(device_t dev);
+static void ahci_begin_transaction(device_t dev, union ccb *ccb);
+static void ahci_dmasetprd(void *arg, bus_dma_segment_t *segs, int nsegs, int error);
+static void ahci_execute_transaction(struct ahci_slot *slot);
+static void ahci_timeout(struct ahci_slot *slot);
+static void ahci_end_transaction(struct ahci_slot *slot, enum ahci_err_type et);
+static int ahci_setup_fis(struct ahci_cmd_tab *ctp, union ccb *ccb, int tag);
+static void ahci_dmainit(device_t dev);
+static void ahci_dmasetupc_cb(void *xsc, bus_dma_segment_t *segs, int nsegs, int error);
+static void ahci_dmafini(device_t dev);
+static void ahci_slotsalloc(device_t dev);
+static void ahci_slotsfree(device_t dev);
+static void ahci_reset(device_t dev);
+static void ahci_start(device_t dev);
+static void ahci_stop(device_t dev);
+static void ahci_clo(device_t dev);
+static void ahci_start_fr(device_t dev);
+static void ahci_stop_fr(device_t dev);
+
+static int ahci_sata_connect(struct ahci_channel *ch);
+static int ahci_sata_phy_reset(device_t dev, int quick);
+
+static void ahci_issue_read_log(device_t dev);
+static void ahci_process_read_log(device_t dev, union ccb *ccb);
+
+static void ahciaction(struct cam_sim *sim, union ccb *ccb);
+static void ahcipoll(struct cam_sim *sim);
+
+MALLOC_DEFINE(M_AHCI, "AHCI driver", "AHCI driver data buffers");
+
+/*
+ * AHCI v1.x compliant SATA chipset support functions
+ */
+static int
+ahci_probe(device_t dev)
+{
+
+ /* is this a possible AHCI candidate? */
+ if (pci_get_class(dev) != PCIC_STORAGE ||
+ pci_get_subclass(dev) != PCIS_STORAGE_SATA)
+ return (ENXIO);
+
+ /* is this PCI device flagged as an AHCI-compliant chip? */
+ if (pci_get_progif(dev) != PCIP_STORAGE_SATA_AHCI_1_0)
+ return (ENXIO);
+
+ device_set_desc_copy(dev, "AHCI controller");
+ return (BUS_PROBE_VENDOR);
+}
+
+static int
+ahci_attach(device_t dev)
+{
+ struct ahci_controller *ctlr = device_get_softc(dev);
+ device_t child;
+ int error, unit, speed;
+ u_int32_t version, caps;
+
+ ctlr->dev = dev;
+ /* if we have a memory BAR(5) we are likely on an AHCI part */
+ ctlr->r_rid = PCIR_BAR(5);
+ if (!(ctlr->r_mem = bus_alloc_resource_any(dev, SYS_RES_MEMORY,
+ &ctlr->r_rid, RF_ACTIVE)))
+ return ENXIO;
+ /* Setup our own memory management for channels. */
+ ctlr->sc_iomem.rm_type = RMAN_ARRAY;
+ ctlr->sc_iomem.rm_descr = "I/O memory addresses";
+ if ((error = rman_init(&ctlr->sc_iomem)) != 0) {
+ bus_release_resource(dev, SYS_RES_MEMORY, ctlr->r_rid, ctlr->r_mem);
+ return (error);
+ }
+ if ((error = rman_manage_region(&ctlr->sc_iomem,
+ rman_get_start(ctlr->r_mem), rman_get_end(ctlr->r_mem))) != 0) {
+ bus_release_resource(dev, SYS_RES_MEMORY, ctlr->r_rid, ctlr->r_mem);
+ rman_fini(&ctlr->sc_iomem);
+ return (error);
+ }
+ /* Reset controller */
+ if ((error = ahci_ctlr_reset(dev)) != 0) {
+ bus_release_resource(dev, SYS_RES_MEMORY, ctlr->r_rid, ctlr->r_mem);
+ rman_fini(&ctlr->sc_iomem);
+ return (error);
+ }
+ /* Get the number of HW channels */
+ ctlr->ichannels = ATA_INL(ctlr->r_mem, AHCI_PI);
+ ctlr->channels = MAX(flsl(ctlr->ichannels),
+ (ATA_INL(ctlr->r_mem, AHCI_CAP) & AHCI_CAP_NPMASK) + 1);
+ /* Setup interrupts. */
+ if (ahci_setup_interrupt(dev)) {
+ bus_release_resource(dev, SYS_RES_MEMORY, ctlr->r_rid, ctlr->r_mem);
+ rman_fini(&ctlr->sc_iomem);
+ return ENXIO;
+ }
+ /* Announce HW capabilities. */
+ version = ATA_INL(ctlr->r_mem, AHCI_VS);
+ caps = ATA_INL(ctlr->r_mem, AHCI_CAP);
+ speed = (caps & AHCI_CAP_ISS) >> AHCI_CAP_ISS_SHIFT;
+ device_printf(dev,
+ "AHCI v%x.%02x with %d %sGbps ports, Port Multiplier %s\n",
+ ((version >> 20) & 0xf0) + ((version >> 16) & 0x0f),
+ ((version >> 4) & 0xf0) + (version & 0x0f),
+ (caps & AHCI_CAP_NPMASK) + 1,
+ ((speed == 1) ? "1.5":((speed == 2) ? "3":
+ ((speed == 3) ? "6":"?"))),
+ (caps & AHCI_CAP_SPM) ?
+ "supported" : "not supported");
+ if (bootverbose) {
+ device_printf(dev, "Caps:%s%s%s%s%s%s%s%s %sGbps",
+ (caps & AHCI_CAP_64BIT) ? " 64bit":"",
+ (caps & AHCI_CAP_SNCQ) ? " NCQ":"",
+ (caps & AHCI_CAP_SSNTF) ? " SNTF":"",
+ (caps & AHCI_CAP_SMPS) ? " MPS":"",
+ (caps & AHCI_CAP_SSS) ? " SS":"",
+ (caps & AHCI_CAP_SALP) ? " ALP":"",
+ (caps & AHCI_CAP_SAL) ? " AL":"",
+ (caps & AHCI_CAP_SCLO) ? " CLO":"",
+ ((speed == 1) ? "1.5":((speed == 2) ? "3":
+ ((speed == 3) ? "6":"?"))));
+ printf("%s%s%s%s%s%s %dcmd%s%s%s %dports\n",
+ (caps & AHCI_CAP_SAM) ? " AM":"",
+ (caps & AHCI_CAP_SPM) ? " PM":"",
+ (caps & AHCI_CAP_FBSS) ? " FBS":"",
+ (caps & AHCI_CAP_PMD) ? " PMD":"",
+ (caps & AHCI_CAP_SSC) ? " SSC":"",
+ (caps & AHCI_CAP_PSC) ? " PSC":"",
+ ((caps & AHCI_CAP_NCS) >> AHCI_CAP_NCS_SHIFT) + 1,
+ (caps & AHCI_CAP_CCCS) ? " CCC":"",
+ (caps & AHCI_CAP_EMS) ? " EM":"",
+ (caps & AHCI_CAP_SXS) ? " eSATA":"",
+ (caps & AHCI_CAP_NPMASK) + 1);
+ }
+ /* Attach all channels on this controller */
+ for (unit = 0; unit < ctlr->channels; unit++) {
+ if ((ctlr->ichannels & (1 << unit)) == 0)
+ continue;
+ child = device_add_child(dev, "ahcich", -1);
+ if (child == NULL)
+ device_printf(dev, "failed to add channel device\n");
+ else
+ device_set_ivars(child, (void *)(intptr_t)unit);
+ }
+ bus_generic_attach(dev);
+ return 0;
+}
+
+static int
+ahci_detach(device_t dev)
+{
+ struct ahci_controller *ctlr = device_get_softc(dev);
+ device_t *children;
+ int nchildren, i;
+
+ /* Detach & delete all children */
+ if (!device_get_children(dev, &children, &nchildren)) {
+ for (i = 0; i < nchildren; i++)
+ device_delete_child(dev, children[i]);
+ free(children, M_TEMP);
+ }
+ /* Free interrupts. */
+ for (i = 0; i < ctlr->numirqs; i++) {
+ if (ctlr->irqs[i].r_irq) {
+ bus_teardown_intr(dev, ctlr->irqs[i].r_irq,
+ ctlr->irqs[i].handle);
+ bus_release_resource(dev, SYS_RES_IRQ,
+ ctlr->irqs[i].r_irq_rid, ctlr->irqs[i].r_irq);
+ }
+ }
+ pci_release_msi(dev);
+ /* Free memory. */
+ rman_fini(&ctlr->sc_iomem);
+ if (ctlr->r_mem)
+ bus_release_resource(dev, SYS_RES_MEMORY, ctlr->r_rid, ctlr->r_mem);
+ return (0);
+}
+
+static int
+ahci_ctlr_reset(device_t dev)
+{
+ struct ahci_controller *ctlr = device_get_softc(dev);
+ int timeout;
+
+ /* Quirk, believed to target the Intel ICH8M (device id 0x2829):
+ * enable the SATA ports before resetting the controller. */
+ if (pci_read_config(dev, 0x00, 4) == 0x28298086 &&
+ (pci_read_config(dev, 0x92, 1) & 0xfe) == 0x04)
+ pci_write_config(dev, 0x92, 0x01, 1);
+ /* Enable AHCI mode */
+ ATA_OUTL(ctlr->r_mem, AHCI_GHC, AHCI_GHC_AE);
+ /* Reset AHCI controller */
+ ATA_OUTL(ctlr->r_mem, AHCI_GHC, AHCI_GHC_AE|AHCI_GHC_HR);
+ for (timeout = 1000; timeout > 0; timeout--) {
+ DELAY(1000);
+ if ((ATA_INL(ctlr->r_mem, AHCI_GHC) & AHCI_GHC_HR) == 0)
+ break;
+ }
+ if (timeout == 0) {
+ device_printf(dev, "AHCI controller reset failure\n");
+ return ENXIO;
+ }
+ /* Reenable AHCI mode */
+ ATA_OUTL(ctlr->r_mem, AHCI_GHC, AHCI_GHC_AE);
+ /* Clear interrupts */
+ ATA_OUTL(ctlr->r_mem, AHCI_IS, ATA_INL(ctlr->r_mem, AHCI_IS));
+ /* Enable AHCI interrupts */
+ ATA_OUTL(ctlr->r_mem, AHCI_GHC,
+ ATA_INL(ctlr->r_mem, AHCI_GHC) | AHCI_GHC_IE);
+ return (0);
+}
+
+static int
+ahci_suspend(device_t dev)
+{
+ struct ahci_controller *ctlr = device_get_softc(dev);
+
+ bus_generic_suspend(dev);
+ /* Disable interrupts, so that state changes don't trigger them */
+ ATA_OUTL(ctlr->r_mem, AHCI_GHC,
+ ATA_INL(ctlr->r_mem, AHCI_GHC) & (~AHCI_GHC_IE));
+ return 0;
+}
+
+static int
+ahci_resume(device_t dev)
+{
+ int res;
+
+ if ((res = ahci_ctlr_reset(dev)) != 0)
+ return (res);
+ return (bus_generic_resume(dev));
+}
+
+static int
+ahci_setup_interrupt(device_t dev)
+{
+ struct ahci_controller *ctlr = device_get_softc(dev);
+ int i, msi = 1;
+
+ /* Process hints. */
+ resource_int_value(device_get_name(dev),
+ device_get_unit(dev), "msi", &msi);
+ if (msi < 0)
+ msi = 0;
+ else if (msi == 1)
+ msi = min(1, pci_msi_count(dev));
+ else if (msi > 1)
+ msi = pci_msi_count(dev);
+ /* Allocate MSI if needed/present. */
+ if (msi && pci_alloc_msi(dev, &msi) == 0) {
+ ctlr->numirqs = msi;
+ } else {
+ msi = 0;
+ ctlr->numirqs = 1;
+ }
+ /* Check for single MSI vector fallback. */
+ if (ctlr->numirqs > 1 &&
+ (ATA_INL(ctlr->r_mem, AHCI_GHC) & AHCI_GHC_MRSM) != 0) {
+ device_printf(dev, "Falling back to one MSI\n");
+ ctlr->numirqs = 1;
+ }
+ /* Allocate all IRQs. */
+ for (i = 0; i < ctlr->numirqs; i++) {
+ ctlr->irqs[i].ctlr = ctlr;
+ ctlr->irqs[i].r_irq_rid = i + (msi ? 1 : 0);
+ if (ctlr->numirqs == 1 || i >= ctlr->channels)
+ ctlr->irqs[i].mode = AHCI_IRQ_MODE_ALL;
+ else if (i == ctlr->numirqs - 1)
+ ctlr->irqs[i].mode = AHCI_IRQ_MODE_AFTER;
+ else
+ ctlr->irqs[i].mode = AHCI_IRQ_MODE_ONE;
+ if (!(ctlr->irqs[i].r_irq = bus_alloc_resource_any(dev, SYS_RES_IRQ,
+ &ctlr->irqs[i].r_irq_rid, RF_SHAREABLE | RF_ACTIVE))) {
+ device_printf(dev, "unable to map interrupt\n");
+ return (ENXIO);
+ }
+ if ((bus_setup_intr(dev, ctlr->irqs[i].r_irq, ATA_INTR_FLAGS, NULL,
+ (ctlr->irqs[i].mode == AHCI_IRQ_MODE_ONE) ? ahci_intr_one : ahci_intr,
+ &ctlr->irqs[i], &ctlr->irqs[i].handle))) {
+ /* SOS XXX release r_irq */
+ device_printf(dev, "unable to setup interrupt\n");
+ return (ENXIO);
+ }
+ }
+ return (0);
+}
+
+/*
+ * Common case interrupt handler.
+ */
+static void
+ahci_intr(void *data)
+{
+ struct ahci_controller_irq *irq = data;
+ struct ahci_controller *ctlr = irq->ctlr;
+ u_int32_t is;
+ void *arg;
+ int unit;
+
+ is = ATA_INL(ctlr->r_mem, AHCI_IS);
+ if (irq->mode == AHCI_IRQ_MODE_ALL)
+ unit = 0;
+ else /* AHCI_IRQ_MODE_AFTER */
+ unit = irq->r_irq_rid - 1;
+ for (; unit < ctlr->channels; unit++) {
+ if ((is & (1 << unit)) != 0 &&
+ (arg = ctlr->interrupt[unit].argument)) {
+ ctlr->interrupt[unit].function(arg);
+ ATA_OUTL(ctlr->r_mem, AHCI_IS, 1 << unit);
+ }
+ }
+}
+
+/*
+ * Simplified interrupt handler for multivector MSI mode.
+ */
+static void
+ahci_intr_one(void *data)
+{
+ struct ahci_controller_irq *irq = data;
+ struct ahci_controller *ctlr = irq->ctlr;
+ void *arg;
+ int unit;
+
+ unit = irq->r_irq_rid - 1;
+ if ((arg = ctlr->interrupt[unit].argument))
+ ctlr->interrupt[unit].function(arg);
+}
+
+static struct resource *
+ahci_alloc_resource(device_t dev, device_t child, int type, int *rid,
+ u_long start, u_long end, u_long count, u_int flags)
+{
+ struct ahci_controller *ctlr = device_get_softc(dev);
+ int unit = ((struct ahci_channel *)device_get_softc(child))->unit;
+ struct resource *res = NULL;
+ int offset = AHCI_OFFSET + (unit << 7);
+ long st;
+
+ switch (type) {
+ case SYS_RES_MEMORY:
+ st = rman_get_start(ctlr->r_mem);
+ res = rman_reserve_resource(&ctlr->sc_iomem, st + offset,
+ st + offset + 127, 128, RF_ACTIVE, child);
+ if (res) {
+ bus_space_handle_t bsh;
+ bus_space_tag_t bst;
+ bsh = rman_get_bushandle(ctlr->r_mem);
+ bst = rman_get_bustag(ctlr->r_mem);
+ bus_space_subregion(bst, bsh, offset, 128, &bsh);
+ rman_set_bushandle(res, bsh);
+ rman_set_bustag(res, bst);
+ }
+ break;
+ case SYS_RES_IRQ:
+ if (*rid == ATA_IRQ_RID)
+ res = ctlr->irqs[0].r_irq;
+ break;
+ }
+ return (res);
+}
+
+static int
+ahci_release_resource(device_t dev, device_t child, int type, int rid,
+ struct resource *r)
+{
+
+ switch (type) {
+ case SYS_RES_MEMORY:
+ rman_release_resource(r);
+ return (0);
+ case SYS_RES_IRQ:
+ if (rid != ATA_IRQ_RID)
+ return (ENOENT);
+ return (0);
+ }
+ return (EINVAL);
+}
+
+static int
+ahci_setup_intr(device_t dev, device_t child, struct resource *irq,
+ int flags, driver_filter_t *filter, driver_intr_t *function,
+ void *argument, void **cookiep)
+{
+ struct ahci_controller *ctlr = device_get_softc(dev);
+ int unit = (intptr_t)device_get_ivars(child);
+
+ if (filter != NULL) {
+ printf("ahci.c: we cannot use a filter here\n");
+ return (EINVAL);
+ }
+ ctlr->interrupt[unit].function = function;
+ ctlr->interrupt[unit].argument = argument;
+ return (0);
+}
+
+static int
+ahci_teardown_intr(device_t dev, device_t child, struct resource *irq,
+ void *cookie)
+{
+ struct ahci_controller *ctlr = device_get_softc(dev);
+ int unit = (intptr_t)device_get_ivars(child);
+
+ ctlr->interrupt[unit].function = NULL;
+ ctlr->interrupt[unit].argument = NULL;
+ return (0);
+}
+
+static int
+ahci_print_child(device_t dev, device_t child)
+{
+ int retval;
+
+ retval = bus_print_child_header(dev, child);
+ retval += printf(" at channel %d",
+ (int)(intptr_t)device_get_ivars(child));
+ retval += bus_print_child_footer(dev, child);
+
+ return (retval);
+}
+
+devclass_t ahci_devclass;
+static device_method_t ahci_methods[] = {
+ DEVMETHOD(device_probe, ahci_probe),
+ DEVMETHOD(device_attach, ahci_attach),
+ DEVMETHOD(device_detach, ahci_detach),
+ DEVMETHOD(device_suspend, ahci_suspend),
+ DEVMETHOD(device_resume, ahci_resume),
+ DEVMETHOD(bus_print_child, ahci_print_child),
+ DEVMETHOD(bus_alloc_resource, ahci_alloc_resource),
+ DEVMETHOD(bus_release_resource, ahci_release_resource),
+ DEVMETHOD(bus_setup_intr, ahci_setup_intr),
+ DEVMETHOD(bus_teardown_intr, ahci_teardown_intr),
+ { 0, 0 }
+};
+static driver_t ahci_driver = {
+ "ahci",
+ ahci_methods,
+ sizeof(struct ahci_controller)
+};
+DRIVER_MODULE(ahci, pci, ahci_driver, ahci_devclass, 0, 0);
+MODULE_VERSION(ahci, 1);
+MODULE_DEPEND(ahci, cam, 1, 1, 1);
+
+static int
+ahci_ch_probe(device_t dev)
+{
+
+ device_set_desc_copy(dev, "AHCI channel");
+ return (0);
+}
+
+static int
+ahci_ch_attach(device_t dev)
+{
+ struct ahci_controller *ctlr = device_get_softc(device_get_parent(dev));
+ struct ahci_channel *ch = device_get_softc(dev);
+ struct cam_devq *devq;
+ int rid, error;
+
+ ch->dev = dev;
+ ch->unit = (intptr_t)device_get_ivars(dev);
+ ch->caps = ATA_INL(ctlr->r_mem, AHCI_CAP);
+ ch->numslots = ((ch->caps & AHCI_CAP_NCS) >> AHCI_CAP_NCS_SHIFT) + 1;
+ resource_int_value(device_get_name(dev),
+ device_get_unit(dev), "pm_level", &ch->pm_level);
+ /*
+ * Limit the speed for my onboard JMicron external port;
+ * it is not really eSATA.
+ */
+ if (pci_get_devid(ctlr->dev) == 0x2363197b &&
+ pci_get_subvendor(ctlr->dev) == 0x1043 &&
+ pci_get_subdevice(ctlr->dev) == 0x81e4 &&
+ ch->unit == 0)
+ ch->sata_rev = 1;
+ resource_int_value(device_get_name(dev),
+ device_get_unit(dev), "sata_rev", &ch->sata_rev);
+ mtx_init(&ch->mtx, "AHCI channel lock", NULL, MTX_DEF);
+ rid = ch->unit;
+ if (!(ch->r_mem = bus_alloc_resource_any(dev, SYS_RES_MEMORY,
+ &rid, RF_ACTIVE)))
+ return (ENXIO);
+ ahci_dmainit(dev);
+ ahci_slotsalloc(dev);
+ ahci_ch_resume(dev);
+ mtx_lock(&ch->mtx);
+ rid = ATA_IRQ_RID;
+ if (!(ch->r_irq = bus_alloc_resource_any(dev, SYS_RES_IRQ,
+ &rid, RF_SHAREABLE | RF_ACTIVE))) {
+ bus_release_resource(dev, SYS_RES_MEMORY, ch->unit, ch->r_mem);
+ device_printf(dev, "Unable to map interrupt\n");
+ mtx_unlock(&ch->mtx);
+ return (ENXIO);
+ }
+ if ((bus_setup_intr(dev, ch->r_irq, ATA_INTR_FLAGS, NULL,
+ ahci_ch_intr_locked, dev, &ch->ih))) {
+ device_printf(dev, "Unable to setup interrupt\n");
+ error = ENXIO;
+ goto err1;
+ }
+ /* Create the device queue for our SIM. */
+ devq = cam_simq_alloc(ch->numslots);
+ if (devq == NULL) {
+ device_printf(dev, "Unable to allocate simq\n");
+ error = ENOMEM;
+ goto err1;
+ }
+ /* Construct SIM entry */
+ ch->sim = cam_sim_alloc(ahciaction, ahcipoll, "ahcich", ch,
+ device_get_unit(dev), &ch->mtx, ch->numslots, 0, devq);
+ if (ch->sim == NULL) {
+ device_printf(dev, "unable to allocate sim\n");
+ error = ENOMEM;
+ goto err2;
+ }
+ if (xpt_bus_register(ch->sim, dev, 0) != CAM_SUCCESS) {
+ device_printf(dev, "unable to register xpt bus\n");
+ error = ENXIO;
+ goto err2;
+ }
+ if (xpt_create_path(&ch->path, /*periph*/NULL, cam_sim_path(ch->sim),
+ CAM_TARGET_WILDCARD, CAM_LUN_WILDCARD) != CAM_REQ_CMP) {
+ device_printf(dev, "unable to create path\n");
+ error = ENXIO;
+ goto err3;
+ }
+ mtx_unlock(&ch->mtx);
+ return (0);
+
+err3:
+ xpt_bus_deregister(cam_sim_path(ch->sim));
+err2:
+ cam_sim_free(ch->sim, /*free_devq*/TRUE);
+err1:
+ bus_release_resource(dev, SYS_RES_IRQ, ATA_IRQ_RID, ch->r_irq);
+ bus_release_resource(dev, SYS_RES_MEMORY, ch->unit, ch->r_mem);
+ mtx_unlock(&ch->mtx);
+ return (error);
+}
+
+static int
+ahci_ch_detach(device_t dev)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+
+ mtx_lock(&ch->mtx);
+ xpt_async(AC_LOST_DEVICE, ch->path, NULL);
+ xpt_free_path(ch->path);
+ xpt_bus_deregister(cam_sim_path(ch->sim));
+ cam_sim_free(ch->sim, /*free_devq*/TRUE);
+ mtx_unlock(&ch->mtx);
+
+ bus_teardown_intr(dev, ch->r_irq, ch->ih);
+ bus_release_resource(dev, SYS_RES_IRQ, ATA_IRQ_RID, ch->r_irq);
+
+ ahci_ch_suspend(dev);
+ ahci_slotsfree(dev);
+ ahci_dmafini(dev);
+
+ bus_release_resource(dev, SYS_RES_MEMORY, ch->unit, ch->r_mem);
+ mtx_destroy(&ch->mtx);
+ return (0);
+}
+
+static int
+ahci_ch_suspend(device_t dev)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+
+ /* Disable port interrupts. */
+ ATA_OUTL(ch->r_mem, AHCI_P_IE, 0);
+ /* Reset command register. */
+ ahci_stop(dev);
+ ahci_stop_fr(dev);
+ ATA_OUTL(ch->r_mem, AHCI_P_CMD, 0);
+ /* Allow everything, including partial and slumber modes. */
+ ATA_OUTL(ch->r_mem, AHCI_P_SCTL, 0);
+ /* Request slumber mode transition and give some time to get there. */
+ ATA_OUTL(ch->r_mem, AHCI_P_CMD, AHCI_P_CMD_SLUMBER);
+ DELAY(100);
+ /* Disable PHY. */
+ ATA_OUTL(ch->r_mem, AHCI_P_SCTL, ATA_SC_DET_DISABLE);
+ return (0);
+}
+
+static int
+ahci_ch_resume(device_t dev)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+ uint64_t work;
+
+ /* Disable port interrupts */
+ ATA_OUTL(ch->r_mem, AHCI_P_IE, 0);
+ /* Setup work areas */
+ work = ch->dma.work_bus + AHCI_CL_OFFSET;
+ ATA_OUTL(ch->r_mem, AHCI_P_CLB, work & 0xffffffff);
+ ATA_OUTL(ch->r_mem, AHCI_P_CLBU, work >> 32);
+ work = ch->dma.rfis_bus;
+ ATA_OUTL(ch->r_mem, AHCI_P_FB, work & 0xffffffff);
+ ATA_OUTL(ch->r_mem, AHCI_P_FBU, work >> 32);
+ /* Activate the channel and power/spin up device */
+ ATA_OUTL(ch->r_mem, AHCI_P_CMD,
+ (AHCI_P_CMD_ACTIVE | AHCI_P_CMD_POD | AHCI_P_CMD_SUD |
+ ((ch->pm_level > 1) ? AHCI_P_CMD_ALPE : 0) |
+ ((ch->pm_level > 2) ? AHCI_P_CMD_ASP : 0 )));
+ ahci_start_fr(dev);
+ ahci_start(dev);
+ return (0);
+}
+
+devclass_t ahcich_devclass;
+static device_method_t ahcich_methods[] = {
+ DEVMETHOD(device_probe, ahci_ch_probe),
+ DEVMETHOD(device_attach, ahci_ch_attach),
+ DEVMETHOD(device_detach, ahci_ch_detach),
+ DEVMETHOD(device_suspend, ahci_ch_suspend),
+ DEVMETHOD(device_resume, ahci_ch_resume),
+ { 0, 0 }
+};
+static driver_t ahcich_driver = {
+ "ahcich",
+ ahcich_methods,
+ sizeof(struct ahci_channel)
+};
+DRIVER_MODULE(ahcich, ahci, ahcich_driver, ahci_devclass, 0, 0);
+
+struct ahci_dc_cb_args {
+ bus_addr_t maddr;
+ int error;
+};
+
+static void
+ahci_dmainit(device_t dev)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+ struct ahci_dc_cb_args dcba;
+
+ if (ch->caps & AHCI_CAP_64BIT)
+ ch->dma.max_address = BUS_SPACE_MAXADDR;
+ else
+ ch->dma.max_address = BUS_SPACE_MAXADDR_32BIT;
+ /* Command area. */
+ if (bus_dma_tag_create(bus_get_dma_tag(dev), 1024, 0,
+ ch->dma.max_address, BUS_SPACE_MAXADDR,
+ NULL, NULL, AHCI_WORK_SIZE, 1, AHCI_WORK_SIZE,
+ 0, NULL, NULL, &ch->dma.work_tag))
+ goto error;
+ if (bus_dmamem_alloc(ch->dma.work_tag, (void **)&ch->dma.work, 0,
+ &ch->dma.work_map))
+ goto error;
+ if (bus_dmamap_load(ch->dma.work_tag, ch->dma.work_map, ch->dma.work,
+ AHCI_WORK_SIZE, ahci_dmasetupc_cb, &dcba, 0) || dcba.error) {
+ bus_dmamem_free(ch->dma.work_tag, ch->dma.work, ch->dma.work_map);
+ goto error;
+ }
+ ch->dma.work_bus = dcba.maddr;
+ /* FIS receive area. */
+ if (bus_dma_tag_create(bus_get_dma_tag(dev), 4096, 0,
+ ch->dma.max_address, BUS_SPACE_MAXADDR,
+ NULL, NULL, 4096, 1, 4096,
+ 0, NULL, NULL, &ch->dma.rfis_tag))
+ goto error;
+ if (bus_dmamem_alloc(ch->dma.rfis_tag, (void **)&ch->dma.rfis, 0,
+ &ch->dma.rfis_map))
+ goto error;
+ if (bus_dmamap_load(ch->dma.rfis_tag, ch->dma.rfis_map, ch->dma.rfis,
+ 4096, ahci_dmasetupc_cb, &dcba, 0) || dcba.error) {
+ bus_dmamem_free(ch->dma.rfis_tag, ch->dma.rfis, ch->dma.rfis_map);
+ goto error;
+ }
+ ch->dma.rfis_bus = dcba.maddr;
+ /* Data area. */
+ if (bus_dma_tag_create(bus_get_dma_tag(dev), 2, 0,
+ ch->dma.max_address, BUS_SPACE_MAXADDR,
+ NULL, NULL,
+ AHCI_SG_ENTRIES * PAGE_SIZE * ch->numslots,
+ AHCI_SG_ENTRIES, AHCI_PRD_MAX,
+ 0, busdma_lock_mutex, &ch->mtx, &ch->dma.data_tag)) {
+ goto error;
+ }
+ return;
+
+error:
+ device_printf(dev, "WARNING - DMA initialization failed\n");
+ ahci_dmafini(dev);
+}
+
+static void
+ahci_dmasetupc_cb(void *xsc, bus_dma_segment_t *segs, int nsegs, int error)
+{
+ struct ahci_dc_cb_args *dcba = (struct ahci_dc_cb_args *)xsc;
+
+ if (!(dcba->error = error))
+ dcba->maddr = segs[0].ds_addr;
+}
+
+static void
+ahci_dmafini(device_t dev)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+
+ if (ch->dma.data_tag) {
+ bus_dma_tag_destroy(ch->dma.data_tag);
+ ch->dma.data_tag = NULL;
+ }
+ if (ch->dma.rfis_bus) {
+ bus_dmamap_unload(ch->dma.rfis_tag, ch->dma.rfis_map);
+ bus_dmamem_free(ch->dma.rfis_tag, ch->dma.rfis, ch->dma.rfis_map);
+ ch->dma.rfis_bus = 0;
+ ch->dma.rfis_map = NULL;
+ ch->dma.rfis = NULL;
+ }
+ if (ch->dma.work_bus) {
+ bus_dmamap_unload(ch->dma.work_tag, ch->dma.work_map);
+ bus_dmamem_free(ch->dma.work_tag, ch->dma.work, ch->dma.work_map);
+ ch->dma.work_bus = 0;
+ ch->dma.work_map = NULL;
+ ch->dma.work = NULL;
+ }
+ if (ch->dma.work_tag) {
+ bus_dma_tag_destroy(ch->dma.work_tag);
+ ch->dma.work_tag = NULL;
+ }
+}
+
+static void
+ahci_slotsalloc(device_t dev)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+ int i;
+
+ /* Alloc and setup command/dma slots */
+ bzero(ch->slot, sizeof(ch->slot));
+ for (i = 0; i < ch->numslots; i++) {
+ struct ahci_slot *slot = &ch->slot[i];
+
+ slot->dev = dev;
+ slot->slot = i;
+ slot->state = AHCI_SLOT_EMPTY;
+ slot->ccb = NULL;
+ callout_init_mtx(&slot->timeout, &ch->mtx, 0);
+
+ if (bus_dmamap_create(ch->dma.data_tag, 0, &slot->dma.data_map))
+ device_printf(ch->dev, "FAILURE - create data_map\n");
+ }
+}
+
+static void
+ahci_slotsfree(device_t dev)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+ int i;
+
+ /* Free all dma slots */
+ for (i = 0; i < ch->numslots; i++) {
+ struct ahci_slot *slot = &ch->slot[i];
+
+ if (slot->dma.data_map) {
+ bus_dmamap_destroy(ch->dma.data_tag, slot->dma.data_map);
+ slot->dma.data_map = NULL;
+ }
+ }
+}
+
+static void
+ahci_phy_check_events(device_t dev)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+ u_int32_t error = ATA_INL(ch->r_mem, AHCI_P_SERR);
+
+ /* Clear error bits/interrupt */
+ ATA_OUTL(ch->r_mem, AHCI_P_SERR, error);
+ /* If we have a connection event, deal with it */
+ if ((error & ATA_SE_PHY_CHANGED) && (ch->pm_level == 0)) {
+ u_int32_t status = ATA_INL(ch->r_mem, AHCI_P_SSTS);
+ if (((status & ATA_SS_DET_MASK) == ATA_SS_DET_PHY_ONLINE) &&
+ ((status & ATA_SS_SPD_MASK) != ATA_SS_SPD_NO_SPEED) &&
+ ((status & ATA_SS_IPM_MASK) == ATA_SS_IPM_ACTIVE)) {
+ if (bootverbose)
+ device_printf(dev, "CONNECT requested\n");
+ ahci_reset(dev);
+ } else {
+ if (bootverbose)
+ device_printf(dev, "DISCONNECT requested\n");
+ ch->devices = 0;
+ }
+ }
+}
+
+static void
+ahci_ch_intr_locked(void *data)
+{
+ device_t dev = (device_t)data;
+ struct ahci_channel *ch = device_get_softc(dev);
+
+ mtx_lock(&ch->mtx);
+ ahci_ch_intr(data);
+ mtx_unlock(&ch->mtx);
+}
+
+static void
+ahci_ch_intr(void *data)
+{
+ device_t dev = (device_t)data;
+ struct ahci_channel *ch = device_get_softc(dev);
+ uint32_t istatus, cstatus, sstatus, ok, err;
+ enum ahci_err_type et;
+ int i, ccs, ncq_err = 0;
+
+ /* Read and clear interrupt statuses. */
+ istatus = ATA_INL(ch->r_mem, AHCI_P_IS);
+ ATA_OUTL(ch->r_mem, AHCI_P_IS, istatus);
+ /* Read command statuses. */
+ cstatus = ATA_INL(ch->r_mem, AHCI_P_CI);
+ sstatus = ATA_INL(ch->r_mem, AHCI_P_SACT);
+ /* Process PHY events */
+ if (istatus & (AHCI_P_IX_PRC | AHCI_P_IX_PC))
+ ahci_phy_check_events(dev);
+ /* Process command errors */
+ if (istatus & (AHCI_P_IX_IF | AHCI_P_IX_HBD | AHCI_P_IX_HBF |
+ AHCI_P_IX_TFE | AHCI_P_IX_OF)) {
+ ccs = (ATA_INL(ch->r_mem, AHCI_P_CMD) & AHCI_P_CMD_CCS_MASK)
+ >> AHCI_P_CMD_CCS_SHIFT;
+ /* Kick controller into sane state */
+ ahci_stop(dev);
+ ahci_start(dev);
+ ok = ch->rslots & ~(cstatus | sstatus);
+ err = ch->rslots & (cstatus | sstatus);
+ } else {
+ ccs = 0;
+ ok = ch->rslots & ~(cstatus | sstatus);
+ err = 0;
+ }
+ /* Complete all successful commands. */
+ for (i = 0; i < ch->numslots; i++) {
+ if ((ok >> i) & 1)
+ ahci_end_transaction(&ch->slot[i], AHCI_ERR_NONE);
+ }
+ /* On error, complete the rest of commands with error statuses. */
+ if (err) {
+ if (!ch->readlog)
+ xpt_freeze_simq(ch->sim, ch->numrslots);
+ if (ch->frozen) {
+ union ccb *fccb = ch->frozen;
+ ch->frozen = NULL;
+ fccb->ccb_h.status = CAM_REQUEUE_REQ | CAM_RELEASE_SIMQ;
+ xpt_done(fccb);
+ }
+ for (i = 0; i < ch->numslots; i++) {
+ /* XXX: requests in loading state. */
+ if (((err >> i) & 1) == 0)
+ continue;
+ if (istatus & AHCI_P_IX_TFE) {
+ /* Task File Error */
+ if (ch->numtslots == 0) {
+ /* Untagged operation. */
+ if (i == ccs)
+ et = AHCI_ERR_TFE;
+ else
+ et = AHCI_ERR_INNOCENT;
+ } else {
+ /* Tagged operation. */
+ et = AHCI_ERR_NCQ;
+ ncq_err = 1;
+ }
+ } else if (istatus & AHCI_P_IX_IF) {
+ /* SATA error */
+ et = AHCI_ERR_SATA;
+ } else
+ et = AHCI_ERR_INVALID;
+ ahci_end_transaction(&ch->slot[i], et);
+ }
+ if (ncq_err)
+ ahci_issue_read_log(dev);
+ }
+}
+
+/* Must be called with channel locked. */
+static int
+ahci_check_collision(device_t dev, union ccb *ccb)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+
+ if ((ccb->ccb_h.func_code == XPT_ATA_IO) &&
+ (ccb->ataio.cmd.flags & CAM_ATAIO_FPDMA)) {
+ /* Tagged command while untagged are active. */
+ if (ch->numrslots != 0 && ch->numtslots == 0)
+ return (1);
+ /* Tagged command while tagged to other target is active. */
+ if (ch->numtslots != 0 &&
+ ch->taggedtarget != ccb->ccb_h.target_id)
+ return (1);
+ } else {
+ /* Untagged command while tagged are active. */
+ if (ch->numrslots != 0 && ch->numtslots != 0)
+ return (1);
+ }
+ if ((ccb->ccb_h.func_code == XPT_ATA_IO) &&
+ (ccb->ataio.cmd.flags & (CAM_ATAIO_CONTROL | CAM_ATAIO_NEEDRESULT))) {
+ /* Atomic command while anything active. */
+ if (ch->numrslots != 0)
+ return (1);
+ }
+ /* We have some atomic command running. */
+ if (ch->aslots != 0)
+ return (1);
+ return (0);
+}
+
+/* Must be called with channel locked. */
+static void
+ahci_begin_transaction(device_t dev, union ccb *ccb)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+ struct ahci_slot *slot;
+ int tag;
+
+ /* Choose empty slot. */
+ tag = ch->lastslot;
+ do {
+ tag++;
+ if (tag >= ch->numslots)
+ tag = 0;
+ if (ch->slot[tag].state == AHCI_SLOT_EMPTY)
+ break;
+ } while (tag != ch->lastslot);
+ if (ch->slot[tag].state != AHCI_SLOT_EMPTY)
+ device_printf(ch->dev, "ALL SLOTS BUSY!\n");
+ ch->lastslot = tag;
+ /* Occupy chosen slot. */
+ slot = &ch->slot[tag];
+ slot->ccb = ccb;
+ /* Update channel stats. */
+ ch->numrslots++;
+ if ((ccb->ccb_h.func_code == XPT_ATA_IO) &&
+ (ccb->ataio.cmd.flags & CAM_ATAIO_FPDMA)) {
+ ch->numtslots++;
+ ch->taggedtarget = ccb->ccb_h.target_id;
+ }
+ if ((ccb->ccb_h.func_code == XPT_ATA_IO) &&
+ (ccb->ataio.cmd.flags & (CAM_ATAIO_CONTROL | CAM_ATAIO_NEEDRESULT)))
+ ch->aslots |= (1 << slot->slot);
+ slot->dma.nsegs = 0;
+ /* If request moves data, setup and load SG list */
+ if ((ccb->ccb_h.flags & CAM_DIR_MASK) != CAM_DIR_NONE) {
+ void *buf;
+ bus_size_t size;
+
+ slot->state = AHCI_SLOT_LOADING;
+ if (ccb->ccb_h.func_code == XPT_ATA_IO) {
+ buf = ccb->ataio.data_ptr;
+ size = ccb->ataio.dxfer_len;
+ } else {
+ buf = ccb->csio.data_ptr;
+ size = ccb->csio.dxfer_len;
+ }
+ bus_dmamap_load(ch->dma.data_tag, slot->dma.data_map,
+ buf, size, ahci_dmasetprd, slot, 0);
+ } else
+ ahci_execute_transaction(slot);
+}
+
+/* Locked by busdma engine. */
+static void
+ahci_dmasetprd(void *arg, bus_dma_segment_t *segs, int nsegs, int error)
+{
+ struct ahci_slot *slot = arg;
+ struct ahci_channel *ch = device_get_softc(slot->dev);
+ struct ahci_cmd_tab *ctp;
+ struct ahci_dma_prd *prd;
+ int i;
+
+ if (error) {
+ device_printf(slot->dev, "DMA load error\n");
+ if (!ch->readlog)
+ xpt_freeze_simq(ch->sim, 1);
+ ahci_end_transaction(slot, AHCI_ERR_INVALID);
+ return;
+ }
+ KASSERT(nsegs <= AHCI_SG_ENTRIES, ("too many DMA segment entries"));
+ /* Get a piece of the workspace for this request */
+ ctp = (struct ahci_cmd_tab *)
+ (ch->dma.work + AHCI_CT_OFFSET + (AHCI_CT_SIZE * slot->slot));
+ /* Fill S/G table */
+ prd = &ctp->prd_tab[0];
+ for (i = 0; i < nsegs; i++) {
+ prd[i].dba = htole64(segs[i].ds_addr);
+ prd[i].dbc = htole32((segs[i].ds_len - 1) & AHCI_PRD_MASK);
+ }
+ slot->dma.nsegs = nsegs;
+ bus_dmamap_sync(ch->dma.data_tag, slot->dma.data_map,
+ ((slot->ccb->ccb_h.flags & CAM_DIR_IN) ?
+ BUS_DMASYNC_PREREAD : BUS_DMASYNC_PREWRITE));
+ ahci_execute_transaction(slot);
+}
+
+/* Must be called with channel locked. */
+static void
+ahci_execute_transaction(struct ahci_slot *slot)
+{
+ device_t dev = slot->dev;
+ struct ahci_channel *ch = device_get_softc(dev);
+ struct ahci_cmd_tab *ctp;
+ struct ahci_cmd_list *clp;
+ union ccb *ccb = slot->ccb;
+ int port = ccb->ccb_h.target_id & 0x0f;
+ int fis_size;
+
+ /* Get a piece of the workspace for this request */
+ ctp = (struct ahci_cmd_tab *)
+ (ch->dma.work + AHCI_CT_OFFSET + (AHCI_CT_SIZE * slot->slot));
+ /* Setup the FIS for this request */
+ if (!(fis_size = ahci_setup_fis(ctp, ccb, slot->slot))) {
+ device_printf(ch->dev, "Setting up SATA FIS failed\n");
+ if (!ch->readlog)
+ xpt_freeze_simq(ch->sim, 1);
+ ahci_end_transaction(slot, AHCI_ERR_INVALID);
+ return;
+ }
+ /* Setup the command list entry */
+ clp = (struct ahci_cmd_list *)
+ (ch->dma.work + AHCI_CL_OFFSET + (AHCI_CL_SIZE * slot->slot));
+ clp->prd_length = slot->dma.nsegs;
+ clp->cmd_flags = (ccb->ccb_h.flags & CAM_DIR_OUT ? AHCI_CMD_WRITE : 0) |
+ (ccb->ccb_h.func_code == XPT_SCSI_IO ?
+ (AHCI_CMD_ATAPI | AHCI_CMD_PREFETCH) : 0) |
+ (fis_size / sizeof(u_int32_t)) |
+ (port << 12);
+ /* Special handling for Soft Reset command. */
+ if ((ccb->ccb_h.func_code == XPT_ATA_IO) &&
+ (ccb->ataio.cmd.flags & CAM_ATAIO_CONTROL) &&
+ (ccb->ataio.cmd.control & ATA_A_RESET)) {
+ /* Kick controller into sane state */
+ ahci_stop(dev);
+ ahci_clo(dev);
+ ahci_start(dev);
+ clp->cmd_flags |= AHCI_CMD_RESET | AHCI_CMD_CLR_BUSY;
+ }
+ clp->bytecount = 0;
+ clp->cmd_table_phys = htole64(ch->dma.work_bus + AHCI_CT_OFFSET +
+ (AHCI_CT_SIZE * slot->slot));
+ bus_dmamap_sync(ch->dma.work_tag, ch->dma.work_map,
+ BUS_DMASYNC_PREWRITE);
+ bus_dmamap_sync(ch->dma.rfis_tag, ch->dma.rfis_map,
+ BUS_DMASYNC_PREREAD);
+ /* Set ACTIVE bit for NCQ commands. */
+ if ((ccb->ccb_h.func_code == XPT_ATA_IO) &&
+ (ccb->ataio.cmd.flags & CAM_ATAIO_FPDMA)) {
+ ATA_OUTL(ch->r_mem, AHCI_P_SACT, 1 << slot->slot);
+ }
+ /* Issue command to the controller. */
+ slot->state = AHCI_SLOT_RUNNING;
+ ch->rslots |= (1 << slot->slot);
+ ATA_OUTL(ch->r_mem, AHCI_P_CI, (1 << slot->slot));
+ /* Device reset commands don't interrupt.  Poll them. */
+ if (ccb->ccb_h.func_code == XPT_ATA_IO &&
+ (ccb->ataio.cmd.command == ATA_DEVICE_RESET ||
+ (ccb->ataio.cmd.flags & CAM_ATAIO_CONTROL))) {
+ int count, timeout = ccb->ccb_h.timeout;
+ enum ahci_err_type et = AHCI_ERR_NONE;
+
+ for (count = 0; count < timeout; count++) {
+ DELAY(1000);
+ if (!(ATA_INL(ch->r_mem, AHCI_P_CI) & (1 << slot->slot)))
+ break;
+ if (ATA_INL(ch->r_mem, AHCI_P_TFD) & ATA_S_ERROR) {
+ device_printf(ch->dev,
+ "Poll error on slot %d, TFD: %04x\n",
+ slot->slot, ATA_INL(ch->r_mem, AHCI_P_TFD));
+ et = AHCI_ERR_TFE;
+ break;
+ }
+ }
+ if (timeout && (count >= timeout)) {
+ device_printf(ch->dev,
+ "Poll timeout on slot %d\n", slot->slot);
+ et = AHCI_ERR_TIMEOUT;
+ }
+ if (et != AHCI_ERR_NONE) {
+ /* Kick controller into sane state */
+ ahci_stop(ch->dev);
+ ahci_start(ch->dev);
+ xpt_freeze_simq(ch->sim, 1);
+ }
+ ahci_end_transaction(slot, et);
+ return;
+ }
+ /* Start command execution timeout */
+ callout_reset(&slot->timeout, (int)ccb->ccb_h.timeout * hz / 1000,
+ (timeout_t*)ahci_timeout, slot);
+ return;
+}
+
+/* Locked by callout mechanism. */
+static void
+ahci_timeout(struct ahci_slot *slot)
+{
+ device_t dev = slot->dev;
+ struct ahci_channel *ch = device_get_softc(dev);
+ int i;
+
+ device_printf(dev, "Timeout on slot %d\n", slot->slot);
+ /* Kick controller into sane state. */
+ ahci_stop(ch->dev);
+ ahci_start(ch->dev);
+
+ if (!ch->readlog)
+ xpt_freeze_simq(ch->sim, ch->numrslots);
+ /* Handle command with timeout. */
+ ahci_end_transaction(&ch->slot[slot->slot], AHCI_ERR_TIMEOUT);
+ /* Handle the rest of commands. */
+ if (ch->frozen) {
+ union ccb *fccb = ch->frozen;
+ ch->frozen = NULL;
+ fccb->ccb_h.status = CAM_REQUEUE_REQ | CAM_RELEASE_SIMQ;
+ xpt_done(fccb);
+ }
+ for (i = 0; i < ch->numslots; i++) {
+ /* Do we have a running request on slot? */
+ if (ch->slot[i].state < AHCI_SLOT_RUNNING)
+ continue;
+ ahci_end_transaction(&ch->slot[i], AHCI_ERR_INNOCENT);
+ }
+}
+
+/* Must be called with channel locked. */
+static void
+ahci_end_transaction(struct ahci_slot *slot, enum ahci_err_type et)
+{
+ device_t dev = slot->dev;
+ struct ahci_channel *ch = device_get_softc(dev);
+ union ccb *ccb = slot->ccb;
+
+ /* Cancel command execution timeout */
+ callout_stop(&slot->timeout);
+ bus_dmamap_sync(ch->dma.work_tag, ch->dma.work_map,
+ BUS_DMASYNC_POSTWRITE);
+ /*
+ * Read the result registers into the result structure.  They may
+ * be incorrect if several commands finished at the same time, so
+ * read them only when we are sure or have to.
+ */
+ if (ccb->ccb_h.func_code == XPT_ATA_IO) {
+ struct ata_res *res = &ccb->ataio.res;
+
+ if ((et == AHCI_ERR_TFE) ||
+ (ccb->ataio.cmd.flags & CAM_ATAIO_NEEDRESULT)) {
+ u_int8_t *fis = ch->dma.rfis + 0x40;
+ uint16_t tfd = ATA_INL(ch->r_mem, AHCI_P_TFD);
+
+ bus_dmamap_sync(ch->dma.rfis_tag, ch->dma.rfis_map,
+ BUS_DMASYNC_POSTREAD);
+ res->status = tfd;
+ res->error = tfd >> 8;
+ res->lba_low = fis[4];
+ res->lba_mid = fis[5];
+ res->lba_high = fis[6];
+ res->device = fis[7];
+ res->lba_low_exp = fis[8];
+ res->lba_mid_exp = fis[9];
+ res->lba_high_exp = fis[10];
+ res->sector_count = fis[12];
+ res->sector_count_exp = fis[13];
+ } else
+ bzero(res, sizeof(*res));
+ }
+ if ((ccb->ccb_h.flags & CAM_DIR_MASK) != CAM_DIR_NONE) {
+ bus_dmamap_sync(ch->dma.data_tag, slot->dma.data_map,
+ (ccb->ccb_h.flags & CAM_DIR_IN) ?
+ BUS_DMASYNC_POSTREAD : BUS_DMASYNC_POSTWRITE);
+ bus_dmamap_unload(ch->dma.data_tag, slot->dma.data_map);
+ }
+ /* Set proper result status. */
+ ccb->ccb_h.status &= ~CAM_STATUS_MASK;
+ if (et != AHCI_ERR_NONE)
+ ccb->ccb_h.status |= CAM_RELEASE_SIMQ;
+ switch (et) {
+ case AHCI_ERR_NONE:
+ ccb->ccb_h.status |= CAM_REQ_CMP;
+ if (ccb->ccb_h.func_code == XPT_SCSI_IO)
+ ccb->csio.scsi_status = SCSI_STATUS_OK;
+ break;
+ case AHCI_ERR_INVALID:
+ ccb->ccb_h.status |= CAM_REQ_INVALID;
+ break;
+ case AHCI_ERR_INNOCENT:
+ ccb->ccb_h.status |= CAM_REQUEUE_REQ;
+ break;
+ case AHCI_ERR_TFE:
+ if (ccb->ccb_h.func_code == XPT_SCSI_IO) {
+ ccb->ccb_h.status |= CAM_SCSI_STATUS_ERROR;
+ ccb->csio.scsi_status = SCSI_STATUS_CHECK_COND;
+ } else {
+ ccb->ccb_h.status |= CAM_ATA_STATUS_ERROR;
+ }
+ break;
+ case AHCI_ERR_SATA:
+ ccb->ccb_h.status |= CAM_UNCOR_PARITY;
+ break;
+ case AHCI_ERR_TIMEOUT:
+ ccb->ccb_h.status |= CAM_CMD_TIMEOUT;
+ break;
+ case AHCI_ERR_NCQ:
+ ccb->ccb_h.status |= CAM_ATA_STATUS_ERROR;
+ break;
+ default:
+ ccb->ccb_h.status |= CAM_REQ_CMP_ERR;
+ }
+ /* Free slot. */
+ ch->rslots &= ~(1 << slot->slot);
+ ch->aslots &= ~(1 << slot->slot);
+ slot->state = AHCI_SLOT_EMPTY;
+ slot->ccb = NULL;
+ /* Update channel stats. */
+ ch->numrslots--;
+ if ((ccb->ccb_h.func_code == XPT_ATA_IO) &&
+ (ccb->ataio.cmd.flags & CAM_ATAIO_FPDMA)) {
+ ch->numtslots--;
+ }
+ /*
+ * If it was the first request of a reset sequence and there was
+ * no error, proceed to the second request.
+ */
+ if ((ccb->ccb_h.func_code == XPT_ATA_IO) &&
+ (ccb->ataio.cmd.flags & CAM_ATAIO_CONTROL) &&
+ (ccb->ataio.cmd.control & ATA_A_RESET) &&
+ et == AHCI_ERR_NONE) {
+ ccb->ataio.cmd.control &= ~ATA_A_RESET;
+ ahci_begin_transaction(dev, ccb);
+ return;
+ }
+ /* If it was an NCQ command error, put the result on hold. */
+ if (et == AHCI_ERR_NCQ) {
+ ch->hold[slot->slot] = ccb;
+ } else if (ch->readlog) /* If it was our READ LOG command, process it. */
+ ahci_process_read_log(dev, ccb);
+ else
+ xpt_done(ccb);
+ /* Unfreeze frozen command. */
+ if (ch->frozen && ch->numrslots == 0) {
+ union ccb *fccb = ch->frozen;
+ ch->frozen = NULL;
+ ahci_begin_transaction(dev, fccb);
+ xpt_release_simq(ch->sim, TRUE);
+ }
+}
+
+static void
+ahci_issue_read_log(device_t dev)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+ union ccb *ccb;
+ struct ccb_ataio *ataio;
+ int i;
+
+ ch->readlog = 1;
+ /* Find a held command. */
+ for (i = 0; i < ch->numslots; i++) {
+ if (ch->hold[i])
+ break;
+ }
+ ccb = xpt_alloc_ccb_nowait();
+ if (ccb == NULL) {
+ device_printf(dev, "Unable to allocate READ LOG command\n");
+ return; /* XXX */
+ }
+ ccb->ccb_h = ch->hold[i]->ccb_h; /* Reuse old header. */
+ ccb->ccb_h.func_code = XPT_ATA_IO;
+ ccb->ccb_h.flags = CAM_DIR_IN;
+ ccb->ccb_h.timeout = 1000; /* 1s should be enough. */
+ ataio = &ccb->ataio;
+ ataio->data_ptr = malloc(512, M_AHCI, M_NOWAIT);
+ if (ataio->data_ptr == NULL) {
+ device_printf(dev, "Unable to allocate memory for READ LOG command\n");
+ xpt_free_ccb(ccb);
+ return; /* XXX */
+ }
+ ataio->dxfer_len = 512;
+ bzero(&ataio->cmd, sizeof(ataio->cmd));
+ ataio->cmd.flags = CAM_ATAIO_48BIT;
+ ataio->cmd.command = 0x2F; /* READ LOG EXT */
+ ataio->cmd.sector_count = 1;
+ ataio->cmd.sector_count_exp = 0;
+ ataio->cmd.lba_low = 0x10;
+ ataio->cmd.lba_mid = 0;
+ ataio->cmd.lba_mid_exp = 0;
+
+ ahci_begin_transaction(dev, ccb);
+}
+
+static void
+ahci_process_read_log(device_t dev, union ccb *ccb)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+ uint8_t *data;
+ struct ata_res *res;
+ int i;
+
+ ch->readlog = 0;
+
+ data = ccb->ataio.data_ptr;
+ if ((ccb->ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP &&
+ (data[0] & 0x80) == 0) {
+ for (i = 0; i < ch->numslots; i++) {
+ if (!ch->hold[i])
+ continue;
+ if ((data[0] & 0x1F) == i) {
+ res = &ch->hold[i]->ataio.res;
+ res->status = data[2];
+ res->error = data[3];
+ res->lba_low = data[4];
+ res->lba_mid = data[5];
+ res->lba_high = data[6];
+ res->device = data[7];
+ res->lba_low_exp = data[8];
+ res->lba_mid_exp = data[9];
+ res->lba_high_exp = data[10];
+ res->sector_count = data[12];
+ res->sector_count_exp = data[13];
+ } else {
+ ch->hold[i]->ccb_h.status &= ~CAM_STATUS_MASK;
+ ch->hold[i]->ccb_h.status |= CAM_REQUEUE_REQ;
+ }
+ xpt_done(ch->hold[i]);
+ ch->hold[i] = NULL;
+ }
+ } else {
+ if ((ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP)
+ device_printf(dev, "READ LOG EXT command failed\n");
+ else if ((data[0] & 0x80) == 0) {
+ device_printf(dev, "Non-queued command error in READ LOG EXT\n");
+ }
+ for (i = 0; i < ch->numslots; i++) {
+ if (!ch->hold[i])
+ continue;
+ xpt_done(ch->hold[i]);
+ ch->hold[i] = NULL;
+ }
+ }
+ free(ccb->ataio.data_ptr, M_AHCI);
+ xpt_free_ccb(ccb);
+}
+
+static void
+ahci_start(device_t dev)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+ u_int32_t cmd;
+
+ /* Clear SATA error register */
+ ATA_OUTL(ch->r_mem, AHCI_P_SERR, 0xFFFFFFFF);
+ /* Clear any interrupts pending on this channel */
+ ATA_OUTL(ch->r_mem, AHCI_P_IS, 0xFFFFFFFF);
+ /* Start operations on this channel */
+ cmd = ATA_INL(ch->r_mem, AHCI_P_CMD);
+ ATA_OUTL(ch->r_mem, AHCI_P_CMD, cmd | AHCI_P_CMD_ST |
+ (ch->pm_present ? AHCI_P_CMD_PMA : 0));
+}
+
+static void
+ahci_stop(device_t dev)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+ u_int32_t cmd;
+ int timeout;
+
+ /* Kill all activity on this channel */
+ cmd = ATA_INL(ch->r_mem, AHCI_P_CMD);
+ ATA_OUTL(ch->r_mem, AHCI_P_CMD, cmd & ~AHCI_P_CMD_ST);
+ /* Wait for activity stop. */
+ timeout = 0;
+ do {
+ DELAY(1000);
+ if (timeout++ > 1000) {
+ device_printf(dev, "stopping AHCI engine failed\n");
+ break;
+ }
+ } while (ATA_INL(ch->r_mem, AHCI_P_CMD) & AHCI_P_CMD_CR);
+}
+
+static void
+ahci_clo(device_t dev)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+ u_int32_t cmd;
+ int timeout;
+
+ /* Issue Command List Override if supported */
+ if (ch->caps & AHCI_CAP_SCLO) {
+ cmd = ATA_INL(ch->r_mem, AHCI_P_CMD);
+ cmd |= AHCI_P_CMD_CLO;
+ ATA_OUTL(ch->r_mem, AHCI_P_CMD, cmd);
+ timeout = 0;
+ do {
+ DELAY(1000);
+ if (timeout++ > 1000) {
+ device_printf(dev, "executing CLO failed\n");
+ break;
+ }
+ } while (ATA_INL(ch->r_mem, AHCI_P_CMD) & AHCI_P_CMD_CLO);
+ }
+}
+
+static void
+ahci_stop_fr(device_t dev)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+ u_int32_t cmd;
+ int timeout;
+
+ /* Kill all FIS reception on this channel */
+ cmd = ATA_INL(ch->r_mem, AHCI_P_CMD);
+ ATA_OUTL(ch->r_mem, AHCI_P_CMD, cmd & ~AHCI_P_CMD_FRE);
+ /* Wait for FIS reception stop. */
+ timeout = 0;
+ do {
+ DELAY(1000);
+ if (timeout++ > 1000) {
+ device_printf(dev, "stopping AHCI FR engine failed\n");
+ break;
+ }
+ } while (ATA_INL(ch->r_mem, AHCI_P_CMD) & AHCI_P_CMD_FR);
+}
+
+static void
+ahci_start_fr(device_t dev)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+ u_int32_t cmd;
+
+ /* Start FIS reception on this channel */
+ cmd = ATA_INL(ch->r_mem, AHCI_P_CMD);
+ ATA_OUTL(ch->r_mem, AHCI_P_CMD, cmd | AHCI_P_CMD_FRE);
+}
+
+static int
+ahci_wait_ready(device_t dev, int t)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+ int timeout = 0;
+ uint32_t val;
+
+ while ((val = ATA_INL(ch->r_mem, AHCI_P_TFD)) &
+ (ATA_S_BUSY | ATA_S_DRQ)) {
+ DELAY(1000);
+ if (timeout++ > t) {
+ device_printf(dev, "port is not ready (timeout %dms) "
+ "tfd = %08x\n", t, val);
+ return (EBUSY);
+ }
+ }
+ if (bootverbose)
+ device_printf(dev, "ready wait time=%dms\n", timeout);
+ return (0);
+}
+
+static void
+ahci_reset(device_t dev)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+ int i;
+
+ if (bootverbose)
+ device_printf(dev, "AHCI reset...\n");
+ xpt_freeze_simq(ch->sim, ch->numrslots);
+ /* Requeue frozen command. */
+ if (ch->frozen) {
+ union ccb *fccb = ch->frozen;
+ ch->frozen = NULL;
+ fccb->ccb_h.status = CAM_REQUEUE_REQ | CAM_RELEASE_SIMQ;
+ xpt_done(fccb);
+ }
+ /* Kill the engine and requeue all running commands. */
+ ahci_stop(dev);
+ for (i = 0; i < ch->numslots; i++) {
+ /* Do we have a running request on slot? */
+ if (ch->slot[i].state < AHCI_SLOT_RUNNING)
+ continue;
+ /* XXX; Commands in loading state. */
+ ahci_end_transaction(&ch->slot[i], AHCI_ERR_INNOCENT);
+ }
+ /* Disable port interrupts */
+ ATA_OUTL(ch->r_mem, AHCI_P_IE, 0);
+ /* Reset and reconnect PHY. */
+ if (!ahci_sata_phy_reset(dev, 0)) {
+ if (bootverbose)
+ device_printf(dev,
+ "AHCI reset done: phy reset found no device\n");
+ ch->devices = 0;
+ /* Enable wanted port interrupts */
+ ATA_OUTL(ch->r_mem, AHCI_P_IE,
+ (AHCI_P_IX_CPD | AHCI_P_IX_PRC | AHCI_P_IX_PC));
+ return;
+ }
+ /* Wait for clearing busy status. */
+ if (ahci_wait_ready(dev, 10000)) {
+ device_printf(dev, "device ready timeout\n");
+ ahci_clo(dev);
+ }
+ ahci_start(dev);
+ ch->devices = 1;
+ /* Enable wanted port interrupts */
+ ATA_OUTL(ch->r_mem, AHCI_P_IE,
+ (AHCI_P_IX_CPD | AHCI_P_IX_TFE | AHCI_P_IX_HBF |
+ AHCI_P_IX_HBD | AHCI_P_IX_IF | AHCI_P_IX_OF |
+ ((ch->pm_level == 0) ? AHCI_P_IX_PRC | AHCI_P_IX_PC : 0) |
+ AHCI_P_IX_DP | AHCI_P_IX_UF | AHCI_P_IX_SDB |
+ AHCI_P_IX_DS | AHCI_P_IX_PS | AHCI_P_IX_DHR));
+ if (bootverbose)
+ device_printf(dev, "AHCI reset done: devices=%08x\n", ch->devices);
+ /* Tell the XPT about the event */
+ xpt_async(AC_BUS_RESET, ch->path, NULL);
+}
+
+static int
+ahci_setup_fis(struct ahci_cmd_tab *ctp, union ccb *ccb, int tag)
+{
+ u_int8_t *fis = &ctp->cfis[0];
+
+ bzero(ctp->cfis, 64);
+ fis[0] = 0x27; /* host to device */
+ fis[1] = (ccb->ccb_h.target_id & 0x0f);
+ if (ccb->ccb_h.func_code == XPT_SCSI_IO) {
+ fis[1] |= 0x80;
+ fis[2] = ATA_PACKET_CMD;
+ if ((ccb->ccb_h.flags & CAM_DIR_MASK) != CAM_DIR_NONE)
+ fis[3] = ATA_F_DMA;
+ else {
+ fis[5] = ccb->csio.dxfer_len;
+ fis[6] = ccb->csio.dxfer_len >> 8;
+ }
+ fis[7] = ATA_D_LBA;
+ fis[15] = ATA_A_4BIT;
+ bzero(ctp->acmd, 32);
+ bcopy((ccb->ccb_h.flags & CAM_CDB_POINTER) ?
+ ccb->csio.cdb_io.cdb_ptr : ccb->csio.cdb_io.cdb_bytes,
+ ctp->acmd, ccb->csio.cdb_len);
+ } else if ((ccb->ataio.cmd.flags & CAM_ATAIO_CONTROL) == 0) {
+ fis[1] |= 0x80;
+ fis[2] = ccb->ataio.cmd.command;
+ fis[3] = ccb->ataio.cmd.features;
+ fis[4] = ccb->ataio.cmd.lba_low;
+ fis[5] = ccb->ataio.cmd.lba_mid;
+ fis[6] = ccb->ataio.cmd.lba_high;
+ fis[7] = ccb->ataio.cmd.device;
+ fis[8] = ccb->ataio.cmd.lba_low_exp;
+ fis[9] = ccb->ataio.cmd.lba_mid_exp;
+ fis[10] = ccb->ataio.cmd.lba_high_exp;
+ fis[11] = ccb->ataio.cmd.features_exp;
+ if (ccb->ataio.cmd.flags & CAM_ATAIO_FPDMA) {
+ fis[12] = tag << 3;
+ fis[13] = 0;
+ } else {
+ fis[12] = ccb->ataio.cmd.sector_count;
+ fis[13] = ccb->ataio.cmd.sector_count_exp;
+ }
+ fis[15] = ATA_A_4BIT;
+ } else {
+ fis[15] = ccb->ataio.cmd.control;
+ }
+ return (20);
+}
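
A standalone sketch of what ahci_setup_fis() above produces may help: a 20-byte Register Host-to-Device FIS, where byte 0 is the FIS type (0x27), bit 7 of byte 1 selects a command (rather than control) update, and for FPDMA (NCQ) commands the tag is carried in bits 7:3 of the sector count field. The helper below is a hypothetical userspace model, not driver code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Hypothetical model of the Register H2D FIS built by
 * ahci_setup_fis(): for NCQ (FPDMA) commands the tag travels in
 * bits 7:3 of byte 12 instead of the sector count.
 */
static int
setup_h2d_fis(uint8_t *fis, uint8_t command, uint8_t pmport,
    int fpdma, int tag, uint16_t count)
{
	memset(fis, 0, 20);
	fis[0] = 0x27;				/* Host to device */
	fis[1] = (pmport & 0x0f) | 0x80;	/* PM port; command update */
	fis[2] = command;
	if (fpdma) {
		fis[12] = tag << 3;		/* NCQ tag in bits 7:3 */
		fis[13] = 0;
	} else {
		fis[12] = count & 0xff;
		fis[13] = count >> 8;
	}
	return (20);				/* FIS length in bytes */
}
```

The driver fills the remaining LBA/feature bytes from the CCB; they are omitted here for brevity.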
+
+static int
+ahci_sata_connect(struct ahci_channel *ch)
+{
+ u_int32_t status;
+ int timeout;
+
+ /* Wait up to 100ms for the link to come up. */
+ for (timeout = 0; timeout < 100 ; timeout++) {
+ status = ATA_INL(ch->r_mem, AHCI_P_SSTS);
+ if (((status & ATA_SS_DET_MASK) == ATA_SS_DET_PHY_ONLINE) &&
+ ((status & ATA_SS_SPD_MASK) != ATA_SS_SPD_NO_SPEED) &&
+ ((status & ATA_SS_IPM_MASK) == ATA_SS_IPM_ACTIVE))
+ break;
+ DELAY(1000);
+ }
+ if (timeout >= 100) {
+ if (bootverbose) {
+ device_printf(ch->dev, "SATA connect timeout status=%08x\n",
+ status);
+ }
+ return (0);
+ }
+ if (bootverbose) {
+ device_printf(ch->dev, "SATA connect time=%dms status=%08x\n",
+ timeout, status);
+ }
+ /* Clear SATA error register */
+ ATA_OUTL(ch->r_mem, AHCI_P_SERR, 0xffffffff);
+ return (1);
+}
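
The three SStatus conditions that ahci_sata_connect() polls for can be captured in a small standalone sketch (field masks copied from ahci.h; sata_link_ok() is illustrative, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* SStatus field masks, as defined in ahci.h. */
#define ATA_SS_DET_MASK		0x0000000f
#define ATA_SS_DET_PHY_ONLINE	0x00000003
#define ATA_SS_SPD_MASK		0x000000f0
#define ATA_SS_SPD_NO_SPEED	0x00000000
#define ATA_SS_IPM_MASK		0x00000f00
#define ATA_SS_IPM_ACTIVE	0x00000100

/*
 * The link is considered up only when all three SStatus fields
 * agree: phy online, a speed negotiated, interface active.
 */
static int
sata_link_ok(uint32_t status)
{
	return ((status & ATA_SS_DET_MASK) == ATA_SS_DET_PHY_ONLINE &&
	    (status & ATA_SS_SPD_MASK) != ATA_SS_SPD_NO_SPEED &&
	    (status & ATA_SS_IPM_MASK) == ATA_SS_IPM_ACTIVE);
}
```

For example, SStatus 0x123 (DET=3, SPD=2, IPM=1) passes, while 0x103 fails because no speed has been negotiated yet.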
+
+static int
+ahci_sata_phy_reset(device_t dev, int quick)
+{
+ struct ahci_channel *ch = device_get_softc(dev);
+ uint32_t val;
+
+ if (quick) {
+ val = ATA_INL(ch->r_mem, AHCI_P_SCTL);
+ if ((val & ATA_SC_DET_MASK) == ATA_SC_DET_IDLE)
+ return (ahci_sata_connect(ch));
+ }
+
+ if (bootverbose)
+ device_printf(dev, "hardware reset ...\n");
+ ATA_OUTL(ch->r_mem, AHCI_P_SCTL, ATA_SC_IPM_DIS_PARTIAL |
+ ATA_SC_IPM_DIS_SLUMBER | ATA_SC_DET_RESET);
+ DELAY(50000);
+ if (ch->sata_rev == 1)
+ val = ATA_SC_SPD_SPEED_GEN1;
+ else if (ch->sata_rev == 2)
+ val = ATA_SC_SPD_SPEED_GEN2;
+ else if (ch->sata_rev == 3)
+ val = ATA_SC_SPD_SPEED_GEN3;
+ else
+ val = 0;
+ ATA_OUTL(ch->r_mem, AHCI_P_SCTL,
+ ATA_SC_DET_IDLE | val | ((ch->pm_level > 0) ? 0 :
+ (ATA_SC_IPM_DIS_PARTIAL | ATA_SC_IPM_DIS_SLUMBER)));
+ DELAY(50000);
+ return (ahci_sata_connect(ch));
+}
+
+static void
+ahciaction(struct cam_sim *sim, union ccb *ccb)
+{
+ device_t dev;
+ struct ahci_channel *ch;
+
+ CAM_DEBUG(ccb->ccb_h.path, CAM_DEBUG_TRACE, ("ahciaction func_code=%x\n",
+ ccb->ccb_h.func_code));
+
+ ch = (struct ahci_channel *)cam_sim_softc(sim);
+ dev = ch->dev;
+ switch (ccb->ccb_h.func_code) {
+ /* Common cases first */
+ case XPT_ATA_IO: /* Execute the requested I/O operation */
+ case XPT_SCSI_IO:
+ if (ch->devices == 0) {
+ ccb->ccb_h.status = CAM_SEL_TIMEOUT;
+ xpt_done(ccb);
+ break;
+ }
+ /* Check for command collision. */
+ if (ahci_check_collision(dev, ccb)) {
+ /* Freeze command. */
+ ch->frozen = ccb;
+ /* We have only one frozen slot, so freeze simq also. */
+ xpt_freeze_simq(ch->sim, 1);
+ return;
+ }
+ ahci_begin_transaction(dev, ccb);
+ break;
+ case XPT_EN_LUN: /* Enable LUN as a target */
+ case XPT_TARGET_IO: /* Execute target I/O request */
+ case XPT_ACCEPT_TARGET_IO: /* Accept Host Target Mode CDB */
+ case XPT_CONT_TARGET_IO: /* Continue Host Target I/O Connection*/
+ case XPT_ABORT: /* Abort the specified CCB */
+ /* XXX Implement */
+ ccb->ccb_h.status = CAM_REQ_INVALID;
+ xpt_done(ccb);
+ break;
+ case XPT_SET_TRAN_SETTINGS:
+ {
+ struct ccb_trans_settings *cts = &ccb->cts;
+
+ if (cts->xport_specific.sata.valid & CTS_SATA_VALID_PM) {
+ ch->pm_present = cts->xport_specific.sata.pm_present;
+ }
+ ccb->ccb_h.status = CAM_REQ_CMP;
+ xpt_done(ccb);
+ break;
+ }
+ case XPT_GET_TRAN_SETTINGS:
+ /* Get default/user set transfer settings for the target */
+ {
+ struct ccb_trans_settings *cts = &ccb->cts;
+ uint32_t status;
+
+ cts->protocol = PROTO_ATA;
+ cts->protocol_version = SCSI_REV_2;
+ cts->transport = XPORT_SATA;
+ cts->transport_version = 2;
+ cts->proto_specific.valid = 0;
+ cts->xport_specific.sata.valid = 0;
+ if (cts->type == CTS_TYPE_CURRENT_SETTINGS)
+ status = ATA_INL(ch->r_mem, AHCI_P_SSTS) & ATA_SS_SPD_MASK;
+ else
+ status = ATA_INL(ch->r_mem, AHCI_P_SCTL) & ATA_SC_SPD_MASK;
+ if (status & ATA_SS_SPD_GEN3) {
+ cts->xport_specific.sata.bitrate = 600000;
+ cts->xport_specific.sata.valid |= CTS_SATA_VALID_SPEED;
+ } else if (status & ATA_SS_SPD_GEN2) {
+ cts->xport_specific.sata.bitrate = 300000;
+ cts->xport_specific.sata.valid |= CTS_SATA_VALID_SPEED;
+ } else if (status & ATA_SS_SPD_GEN1) {
+ cts->xport_specific.sata.bitrate = 150000;
+ cts->xport_specific.sata.valid |= CTS_SATA_VALID_SPEED;
+ }
+ if (cts->type == CTS_TYPE_CURRENT_SETTINGS) {
+ cts->xport_specific.sata.pm_present =
+ (ATA_INL(ch->r_mem, AHCI_P_CMD) & AHCI_P_CMD_PMA) ?
+ 1 : 0;
+ } else {
+ cts->xport_specific.sata.pm_present = ch->pm_present;
+ }
+ cts->xport_specific.sata.valid |= CTS_SATA_VALID_PM;
+ ccb->ccb_h.status = CAM_REQ_CMP;
+ xpt_done(ccb);
+ break;
+ }
+#if 0
+ case XPT_CALC_GEOMETRY:
+ {
+ struct ccb_calc_geometry *ccg;
+ uint32_t size_mb;
+ uint32_t secs_per_cylinder;
+
+ ccg = &ccb->ccg;
+ size_mb = ccg->volume_size
+ / ((1024L * 1024L) / ccg->block_size);
+ if (size_mb >= 1024 && (aha->extended_trans != 0)) {
+ if (size_mb >= 2048) {
+ ccg->heads = 255;
+ ccg->secs_per_track = 63;
+ } else {
+ ccg->heads = 128;
+ ccg->secs_per_track = 32;
+ }
+ } else {
+ ccg->heads = 64;
+ ccg->secs_per_track = 32;
+ }
+ secs_per_cylinder = ccg->heads * ccg->secs_per_track;
+ ccg->cylinders = ccg->volume_size / secs_per_cylinder;
+ ccb->ccb_h.status = CAM_REQ_CMP;
+ xpt_done(ccb);
+ break;
+ }
+#endif
+ case XPT_RESET_BUS: /* Reset the specified SCSI bus */
+ case XPT_RESET_DEV: /* Bus Device Reset the specified SCSI device */
+ ahci_reset(dev);
+ ccb->ccb_h.status = CAM_REQ_CMP;
+ xpt_done(ccb);
+ break;
+ case XPT_TERM_IO: /* Terminate the I/O process */
+ /* XXX Implement */
+ ccb->ccb_h.status = CAM_REQ_INVALID;
+ xpt_done(ccb);
+ break;
+ case XPT_PATH_INQ: /* Path routing inquiry */
+ {
+ struct ccb_pathinq *cpi = &ccb->cpi;
+
+ cpi->version_num = 1; /* XXX??? */
+ cpi->hba_inquiry = PI_SDTR_ABLE | PI_TAG_ABLE;
+ if (ch->caps & AHCI_CAP_SPM)
+ cpi->hba_inquiry |= PI_SATAPM;
+ cpi->target_sprt = 0;
+ cpi->hba_misc = PIM_SEQSCAN;
+ cpi->hba_eng_cnt = 0;
+ if (ch->caps & AHCI_CAP_SPM)
+ cpi->max_target = 14;
+ else
+ cpi->max_target = 0;
+ cpi->max_lun = 0;
+ cpi->initiator_id = 0;
+ cpi->bus_id = cam_sim_bus(sim);
+ cpi->base_transfer_speed = 150000;
+ strncpy(cpi->sim_vid, "FreeBSD", SIM_IDLEN);
+ strncpy(cpi->hba_vid, "AHCI", HBA_IDLEN);
+ strncpy(cpi->dev_name, cam_sim_name(sim), DEV_IDLEN);
+ cpi->unit_number = cam_sim_unit(sim);
+ cpi->transport = XPORT_SATA;
+ cpi->transport_version = 2;
+ cpi->protocol = PROTO_ATA;
+ cpi->protocol_version = SCSI_REV_2;
+ cpi->maxio = MAXPHYS;
+ cpi->ccb_h.status = CAM_REQ_CMP;
+ xpt_done(ccb);
+ break;
+ }
+ default:
+ ccb->ccb_h.status = CAM_REQ_INVALID;
+ xpt_done(ccb);
+ break;
+ }
+}
+
+static void
+ahcipoll(struct cam_sim *sim)
+{
+ struct ahci_channel *ch = (struct ahci_channel *)cam_sim_softc(sim);
+
+ ahci_ch_intr(ch->dev);
+}
diff --git a/sys/dev/ahci/ahci.h b/sys/dev/ahci/ahci.h
new file mode 100644
index 000000000000..dadbd84f2556
--- /dev/null
+++ b/sys/dev/ahci/ahci.h
@@ -0,0 +1,422 @@
+/*-
+ * Copyright (c) 1998 - 2008 Søren Schmidt <sos@FreeBSD.org>
+ * Copyright (c) 2009 Alexander Motin <mav@FreeBSD.org>
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer,
+ * without modification, immediately at the beginning of the file.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * $FreeBSD$
+ */
+
+/* ATA register defines */
+#define ATA_DATA 0 /* (RW) data */
+
+#define ATA_FEATURE 1 /* (W) feature */
+#define ATA_F_DMA 0x01 /* enable DMA */
+#define ATA_F_OVL 0x02 /* enable overlap */
+
+#define ATA_COUNT 2 /* (W) sector count */
+
+#define ATA_SECTOR 3 /* (RW) sector # */
+#define ATA_CYL_LSB 4 /* (RW) cylinder# LSB */
+#define ATA_CYL_MSB 5 /* (RW) cylinder# MSB */
+#define ATA_DRIVE 6 /* (W) Sector/Drive/Head */
+#define ATA_D_LBA 0x40 /* use LBA addressing */
+#define ATA_D_IBM 0xa0 /* 512 byte sectors, ECC */
+
+#define ATA_COMMAND 7 /* (W) command */
+
+#define ATA_ERROR 8 /* (R) error */
+#define ATA_E_ILI 0x01 /* illegal length */
+#define ATA_E_NM 0x02 /* no media */
+#define ATA_E_ABORT 0x04 /* command aborted */
+#define ATA_E_MCR 0x08 /* media change request */
+#define ATA_E_IDNF 0x10 /* ID not found */
+#define ATA_E_MC 0x20 /* media changed */
+#define ATA_E_UNC 0x40 /* uncorrectable data */
+#define ATA_E_ICRC 0x80 /* UDMA crc error */
+#define ATA_E_ATAPI_SENSE_MASK 0xf0 /* ATAPI sense key mask */
+
+#define ATA_IREASON 9 /* (R) interrupt reason */
+#define ATA_I_CMD 0x01 /* cmd (1) | data (0) */
+#define ATA_I_IN 0x02 /* read (1) | write (0) */
+#define ATA_I_RELEASE 0x04 /* released bus (1) */
+#define ATA_I_TAGMASK 0xf8 /* tag mask */
+
+#define ATA_STATUS 10 /* (R) status */
+#define ATA_ALTSTAT 11 /* (R) alternate status */
+#define ATA_S_ERROR 0x01 /* error */
+#define ATA_S_INDEX 0x02 /* index */
+#define ATA_S_CORR 0x04 /* data corrected */
+#define ATA_S_DRQ 0x08 /* data request */
+#define ATA_S_DSC 0x10 /* drive seek completed */
+#define ATA_S_SERVICE 0x10 /* drive needs service */
+#define ATA_S_DWF 0x20 /* drive write fault */
+#define ATA_S_DMA 0x20 /* DMA ready */
+#define ATA_S_READY 0x40 /* drive ready */
+#define ATA_S_BUSY 0x80 /* busy */
+
+#define ATA_CONTROL 12 /* (W) control */
+#define ATA_A_IDS 0x02 /* disable interrupts */
+#define ATA_A_RESET 0x04 /* RESET controller */
+#define ATA_A_4BIT 0x08 /* 4 head bits */
+#define ATA_A_HOB 0x80 /* High Order Byte enable */
+
+/* SATA register defines */
+#define ATA_SSTATUS 13
+#define ATA_SS_DET_MASK 0x0000000f
+#define ATA_SS_DET_NO_DEVICE 0x00000000
+#define ATA_SS_DET_DEV_PRESENT 0x00000001
+#define ATA_SS_DET_PHY_ONLINE 0x00000003
+#define ATA_SS_DET_PHY_OFFLINE 0x00000004
+
+#define ATA_SS_SPD_MASK 0x000000f0
+#define ATA_SS_SPD_NO_SPEED 0x00000000
+#define ATA_SS_SPD_GEN1 0x00000010
+#define ATA_SS_SPD_GEN2 0x00000020
+#define ATA_SS_SPD_GEN3 0x00000040
+
+#define ATA_SS_IPM_MASK 0x00000f00
+#define ATA_SS_IPM_NO_DEVICE 0x00000000
+#define ATA_SS_IPM_ACTIVE 0x00000100
+#define ATA_SS_IPM_PARTIAL 0x00000200
+#define ATA_SS_IPM_SLUMBER 0x00000600
+
+#define ATA_SERROR 14
+#define ATA_SE_DATA_CORRECTED 0x00000001
+#define ATA_SE_COMM_CORRECTED 0x00000002
+#define ATA_SE_DATA_ERR 0x00000100
+#define ATA_SE_COMM_ERR 0x00000200
+#define ATA_SE_PROT_ERR 0x00000400
+#define ATA_SE_HOST_ERR 0x00000800
+#define ATA_SE_PHY_CHANGED 0x00010000
+#define ATA_SE_PHY_IERROR 0x00020000
+#define ATA_SE_COMM_WAKE 0x00040000
+#define ATA_SE_DECODE_ERR 0x00080000
+#define ATA_SE_PARITY_ERR 0x00100000
+#define ATA_SE_CRC_ERR 0x00200000
+#define ATA_SE_HANDSHAKE_ERR 0x00400000
+#define ATA_SE_LINKSEQ_ERR 0x00800000
+#define ATA_SE_TRANSPORT_ERR 0x01000000
+#define ATA_SE_UNKNOWN_FIS 0x02000000
+
+#define ATA_SCONTROL 15
+#define ATA_SC_DET_MASK 0x0000000f
+#define ATA_SC_DET_IDLE 0x00000000
+#define ATA_SC_DET_RESET 0x00000001
+#define ATA_SC_DET_DISABLE 0x00000004
+
+#define ATA_SC_SPD_MASK 0x000000f0
+#define ATA_SC_SPD_NO_SPEED 0x00000000
+#define ATA_SC_SPD_SPEED_GEN1 0x00000010
+#define ATA_SC_SPD_SPEED_GEN2 0x00000020
+#define ATA_SC_SPD_SPEED_GEN3 0x00000040
+
+#define ATA_SC_IPM_MASK 0x00000f00
+#define ATA_SC_IPM_NONE 0x00000000
+#define ATA_SC_IPM_DIS_PARTIAL 0x00000100
+#define ATA_SC_IPM_DIS_SLUMBER 0x00000200
+
+#define ATA_SACTIVE 16
+
+#define AHCI_MAX_PORTS 32
+#define AHCI_MAX_SLOTS 32
+
+/* SATA AHCI v1.0 register defines */
+#define AHCI_CAP 0x00
+#define AHCI_CAP_NPMASK 0x0000001f
+#define AHCI_CAP_SXS 0x00000020
+#define AHCI_CAP_EMS 0x00000040
+#define AHCI_CAP_CCCS 0x00000080
+#define AHCI_CAP_NCS 0x00001F00
+#define AHCI_CAP_NCS_SHIFT 8
+#define AHCI_CAP_PSC 0x00002000
+#define AHCI_CAP_SSC 0x00004000
+#define AHCI_CAP_PMD 0x00008000
+#define AHCI_CAP_FBSS 0x00010000
+#define AHCI_CAP_SPM 0x00020000
+#define AHCI_CAP_SAM 0x00080000
+#define AHCI_CAP_ISS 0x00F00000
+#define AHCI_CAP_ISS_SHIFT 20
+#define AHCI_CAP_SCLO 0x01000000
+#define AHCI_CAP_SAL 0x02000000
+#define AHCI_CAP_SALP 0x04000000
+#define AHCI_CAP_SSS 0x08000000
+#define AHCI_CAP_SMPS 0x10000000
+#define AHCI_CAP_SSNTF 0x20000000
+#define AHCI_CAP_SNCQ 0x40000000
+#define AHCI_CAP_64BIT 0x80000000
+
+#define AHCI_GHC 0x04
+#define AHCI_GHC_AE 0x80000000
+#define AHCI_GHC_MRSM 0x00000004
+#define AHCI_GHC_IE 0x00000002
+#define AHCI_GHC_HR 0x00000001
+
+#define AHCI_IS 0x08
+#define AHCI_PI 0x0c
+#define AHCI_VS 0x10
+
+#define AHCI_OFFSET 0x100
+#define AHCI_STEP 0x80
+
+#define AHCI_P_CLB 0x00
+#define AHCI_P_CLBU 0x04
+#define AHCI_P_FB 0x08
+#define AHCI_P_FBU 0x0c
+#define AHCI_P_IS 0x10
+#define AHCI_P_IE 0x14
+#define AHCI_P_IX_DHR 0x00000001
+#define AHCI_P_IX_PS 0x00000002
+#define AHCI_P_IX_DS 0x00000004
+#define AHCI_P_IX_SDB 0x00000008
+#define AHCI_P_IX_UF 0x00000010
+#define AHCI_P_IX_DP 0x00000020
+#define AHCI_P_IX_PC 0x00000040
+#define AHCI_P_IX_DI 0x00000080
+
+#define AHCI_P_IX_PRC 0x00400000
+#define AHCI_P_IX_IPM 0x00800000
+#define AHCI_P_IX_OF 0x01000000
+#define AHCI_P_IX_INF 0x04000000
+#define AHCI_P_IX_IF 0x08000000
+#define AHCI_P_IX_HBD 0x10000000
+#define AHCI_P_IX_HBF 0x20000000
+#define AHCI_P_IX_TFE 0x40000000
+#define AHCI_P_IX_CPD 0x80000000
+
+#define AHCI_P_CMD 0x18
+#define AHCI_P_CMD_ST 0x00000001
+#define AHCI_P_CMD_SUD 0x00000002
+#define AHCI_P_CMD_POD 0x00000004
+#define AHCI_P_CMD_CLO 0x00000008
+#define AHCI_P_CMD_FRE 0x00000010
+#define AHCI_P_CMD_CCS_MASK 0x00001f00
+#define AHCI_P_CMD_CCS_SHIFT 8
+#define AHCI_P_CMD_ISS 0x00002000
+#define AHCI_P_CMD_FR 0x00004000
+#define AHCI_P_CMD_CR 0x00008000
+#define AHCI_P_CMD_CPS 0x00010000
+#define AHCI_P_CMD_PMA 0x00020000
+#define AHCI_P_CMD_HPCP 0x00040000
+#define AHCI_P_CMD_ISP 0x00080000
+#define AHCI_P_CMD_CPD 0x00100000
+#define AHCI_P_CMD_ATAPI 0x01000000
+#define AHCI_P_CMD_DLAE 0x02000000
+#define AHCI_P_CMD_ALPE 0x04000000
+#define AHCI_P_CMD_ASP 0x08000000
+#define AHCI_P_CMD_ICC_MASK 0xf0000000
+#define AHCI_P_CMD_NOOP 0x00000000
+#define AHCI_P_CMD_ACTIVE 0x10000000
+#define AHCI_P_CMD_PARTIAL 0x20000000
+#define AHCI_P_CMD_SLUMBER 0x60000000
+
+#define AHCI_P_TFD 0x20
+#define AHCI_P_SIG 0x24
+#define AHCI_P_SSTS 0x28
+#define AHCI_P_SCTL 0x2c
+#define AHCI_P_SERR 0x30
+#define AHCI_P_SACT 0x34
+#define AHCI_P_CI 0x38
+#define AHCI_P_SNTF 0x3C
+#define AHCI_P_FBS 0x40
+
+/* Just to be sure, if building as module. */
+#if MAXPHYS < 512 * 1024
+#undef MAXPHYS
+#define MAXPHYS (512 * 1024)
+#endif
+/* Pessimistic prognosis on number of required S/G entries */
+#define AHCI_SG_ENTRIES (roundup(btoc(MAXPHYS) + 1, 8))
+/* Command list. 32 commands. First, 1Kbyte aligned. */
+#define AHCI_CL_OFFSET 0
+#define AHCI_CL_SIZE 32
+/* Command tables. Up to 32 commands, each 128byte aligned. */
+#define AHCI_CT_OFFSET (AHCI_CL_OFFSET + AHCI_CL_SIZE * AHCI_MAX_SLOTS)
+#define AHCI_CT_SIZE (128 + AHCI_SG_ENTRIES * 16)
+/* Total main work area. */
+#define AHCI_WORK_SIZE (AHCI_CT_OFFSET + AHCI_CT_SIZE * ch->numslots)
+
+struct ahci_dma_prd {
+ u_int64_t dba;
+ u_int32_t reserved;
+ u_int32_t dbc; /* 0 based */
+#define AHCI_PRD_MASK 0x003fffff /* max 4MB */
+#define AHCI_PRD_MAX (AHCI_PRD_MASK + 1)
+#define AHCI_PRD_IPC (1 << 31)
+} __packed;
+
+struct ahci_cmd_tab {
+ u_int8_t cfis[64];
+ u_int8_t acmd[32];
+ u_int8_t reserved[32];
+ struct ahci_dma_prd prd_tab[AHCI_SG_ENTRIES];
+} __packed;
+
+struct ahci_cmd_list {
+ u_int16_t cmd_flags;
+#define AHCI_CMD_ATAPI 0x0020
+#define AHCI_CMD_WRITE 0x0040
+#define AHCI_CMD_PREFETCH 0x0080
+#define AHCI_CMD_RESET 0x0100
+#define AHCI_CMD_BIST 0x0200
+#define AHCI_CMD_CLR_BUSY 0x0400
+
+ u_int16_t prd_length; /* PRD entries */
+ u_int32_t bytecount;
+ u_int64_t cmd_table_phys; /* 128byte aligned */
+} __packed;
+
+/* misc defines */
+#define ATA_IRQ_RID 0
+#define ATA_INTR_FLAGS (INTR_MPSAFE|INTR_TYPE_BIO|INTR_ENTROPY)
+
+struct ata_dmaslot {
+ bus_dmamap_t data_map; /* data DMA map */
+ int nsegs; /* Number of segs loaded */
+};
+
+/* structure holding DMA related information */
+struct ata_dma {
+ bus_dma_tag_t work_tag; /* workspace DMA tag */
+ bus_dmamap_t work_map; /* workspace DMA map */
+ uint8_t *work; /* workspace */
+ bus_addr_t work_bus; /* bus address of work */
+ bus_dma_tag_t rfis_tag; /* RFIS list DMA tag */
+ bus_dmamap_t rfis_map; /* RFIS list DMA map */
+ uint8_t *rfis; /* FIS receive area */
+ bus_addr_t rfis_bus; /* bus address of rfis */
+ bus_dma_tag_t data_tag; /* data DMA tag */
+ u_int64_t max_address; /* highest DMA'able address */
+};
+
+enum ahci_slot_states {
+ AHCI_SLOT_EMPTY,
+ AHCI_SLOT_LOADING,
+ AHCI_SLOT_RUNNING,
+ AHCI_SLOT_WAITING
+};
+
+struct ahci_slot {
+ device_t dev; /* Device handle */
+ u_int8_t slot; /* Number of this slot */
+ enum ahci_slot_states state; /* Slot state */
+ union ccb *ccb; /* CCB occupying slot */
+ struct ata_dmaslot dma; /* DMA data of this slot */
+ struct callout timeout; /* Execution timeout */
+};
+
+/* structure describing an ATA channel */
+struct ahci_channel {
+ device_t dev; /* Device handle */
+ int unit; /* Physical channel */
+ struct resource *r_mem; /* Memory of this channel */
+ struct resource *r_irq; /* Interrupt of this channel */
+ void *ih; /* Interrupt handle */
+ struct ata_dma dma; /* DMA data */
+ struct cam_sim *sim;
+ struct cam_path *path;
+ uint32_t caps; /* Controller capabilities */
+ int numslots; /* Number of present slots */
+ int pm_level; /* power management level */
+ int sata_rev; /* Maximum allowed SATA generation */
+
+ struct ahci_slot slot[AHCI_MAX_SLOTS];
+ union ccb *hold[AHCI_MAX_SLOTS];
+ struct mtx mtx; /* state lock */
+ int devices; /* What is present */
+ int pm_present; /* PM presence reported */
+ uint32_t rslots; /* Running slots */
+ uint32_t aslots; /* Slots with atomic commands */
+ int numrslots; /* Number of running slots */
+ int numtslots; /* Number of tagged slots */
+ int readlog; /* Our READ LOG active */
+ int lastslot; /* Last used slot */
+ int taggedtarget; /* Last tagged target */
+ union ccb *frozen; /* Frozen command */
+};
+
+/* structure describing an AHCI controller */
+struct ahci_controller {
+ device_t dev;
+ int r_rid;
+ struct resource *r_mem;
+ struct rman sc_iomem;
+ struct ahci_controller_irq {
+ struct ahci_controller *ctlr;
+ struct resource *r_irq;
+ void *handle;
+ int r_irq_rid;
+ int mode;
+#define AHCI_IRQ_MODE_ALL 0
+#define AHCI_IRQ_MODE_AFTER 1
+#define AHCI_IRQ_MODE_ONE 2
+ } irqs[16];
+ int numirqs;
+ int channels;
+ int ichannels;
+ struct {
+ void (*function)(void *);
+ void *argument;
+ } interrupt[AHCI_MAX_PORTS];
+};
+
+enum ahci_err_type {
+ AHCI_ERR_NONE, /* No error */
+ AHCI_ERR_INVALID, /* Error detected by us before submitting. */
+ AHCI_ERR_INNOCENT, /* Innocent victim. */
+ AHCI_ERR_TFE, /* Task File Error. */
+ AHCI_ERR_SATA, /* SATA error. */
+ AHCI_ERR_TIMEOUT, /* Command execution timeout. */
+ AHCI_ERR_NCQ, /* NCQ command error. CCB should be put on hold
+ * until READ LOG executed to reveal error. */
+};
+
+/* macros to hide busspace ugliness */
+#define ATA_INB(res, offset) \
+ bus_read_1((res), (offset))
+#define ATA_INW(res, offset) \
+ bus_read_2((res), (offset))
+#define ATA_INL(res, offset) \
+ bus_read_4((res), (offset))
+#define ATA_INSW(res, offset, addr, count) \
+ bus_read_multi_2((res), (offset), (addr), (count))
+#define ATA_INSW_STRM(res, offset, addr, count) \
+ bus_read_multi_stream_2((res), (offset), (addr), (count))
+#define ATA_INSL(res, offset, addr, count) \
+ bus_read_multi_4((res), (offset), (addr), (count))
+#define ATA_INSL_STRM(res, offset, addr, count) \
+ bus_read_multi_stream_4((res), (offset), (addr), (count))
+#define ATA_OUTB(res, offset, value) \
+ bus_write_1((res), (offset), (value))
+#define ATA_OUTW(res, offset, value) \
+ bus_write_2((res), (offset), (value))
+#define ATA_OUTL(res, offset, value) \
+ bus_write_4((res), (offset), (value))
+#define ATA_OUTSW(res, offset, addr, count) \
+ bus_write_multi_2((res), (offset), (addr), (count))
+#define ATA_OUTSW_STRM(res, offset, addr, count) \
+ bus_write_multi_stream_2((res), (offset), (addr), (count))
+#define ATA_OUTSL(res, offset, addr, count) \
+ bus_write_multi_4((res), (offset), (addr), (count))
+#define ATA_OUTSL_STRM(res, offset, addr, count) \
+ bus_write_multi_stream_4((res), (offset), (addr), (count))
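
As a sanity check on the sizing macros in this header, the same arithmetic can be reproduced standalone. A 4K PAGE_SIZE is assumed, and btoc()/roundup() are re-derived from their usual kernel definitions:

```c
#include <assert.h>

/* Assumed page size; btoc() and roundup() re-derived for userspace. */
#define PAGE_SIZE	4096
#define btoc(x)		(((x) + PAGE_SIZE - 1) / PAGE_SIZE)
#define roundup(x, y)	((((x) + ((y) - 1)) / (y)) * (y))

#define MAXPHYS		(512 * 1024)
#define AHCI_MAX_SLOTS	32

/* Worst case: one PRD per page of MAXPHYS, plus one for an
 * unaligned head, rounded up to a multiple of 8 entries. */
#define AHCI_SG_ENTRIES	(roundup(btoc(MAXPHYS) + 1, 8))

/* Command list: 32 commands at 32 bytes each, 1Kbyte aligned. */
#define AHCI_CL_OFFSET	0
#define AHCI_CL_SIZE	32
/* Command tables follow: 128-byte header plus 16 bytes per PRD. */
#define AHCI_CT_OFFSET	(AHCI_CL_OFFSET + AHCI_CL_SIZE * AHCI_MAX_SLOTS)
#define AHCI_CT_SIZE	(128 + AHCI_SG_ENTRIES * 16)
```

With 512K MAXPHYS and 4K pages this yields 136 S/G entries, a 2304-byte command table per slot, and roughly 73K of per-channel work area for a full 32 slots.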
diff --git a/sys/dev/aic7xxx/aic79xx_osm.h b/sys/dev/aic7xxx/aic79xx_osm.h
index 3c97a63d3e0a..b786cec2baf4 100644
--- a/sys/dev/aic7xxx/aic79xx_osm.h
+++ b/sys/dev/aic7xxx/aic79xx_osm.h
@@ -102,7 +102,8 @@
* The number of dma segments supported. The sequencer can handle any number
* of physically contiguous S/G entrys. To reduce the driver's memory
* consumption, we limit the number supported to be sufficient to handle
- * the largest mapping supported by the kernel, MAXPHYS. Assuming the
+ * the largest mapping supported by the legacy kernel MAXPHYS setting of
+ * 128K. This can be increased once some testing is done. Assuming the
* transfer is as fragmented as possible and unaligned, this turns out to
* be the number of paged sized transfers in MAXPHYS plus an extra element
* to handle any unaligned residual. The sequencer fetches SG elements
@@ -110,7 +111,8 @@
* multiple of 16 which should align us on even the largest of cacheline
* boundaries.
*/
-#define AHD_NSEG (roundup(btoc(MAXPHYS) + 1, 16))
+#define AHD_MAXPHYS (128 * 1024)
+#define AHD_NSEG (roundup(btoc(AHD_MAXPHYS) + 1, 16))
/* This driver supports target mode */
#ifdef NOT_YET
diff --git a/sys/dev/aic7xxx/aic7xxx_osm.h b/sys/dev/aic7xxx/aic7xxx_osm.h
index d1059714be0a..388cf9e84fb8 100644
--- a/sys/dev/aic7xxx/aic7xxx_osm.h
+++ b/sys/dev/aic7xxx/aic7xxx_osm.h
@@ -115,15 +115,17 @@ extern devclass_t ahc_devclass;
 * The number of dma segments supported. The sequencer can handle any number
 * of physically contiguous S/G entrys. To reduce the driver's memory
 * consumption, we limit the number supported to be sufficient to handle
- * the largest mapping supported by the kernel, MAXPHYS. Assuming the
+ * the largest mapping supported by the legacy kernel MAXPHYS setting of
+ * 128K. This can be increased once some testing is done. Assuming the
 * transfer is as fragmented as possible and unaligned, this turns out to
 * be the number of paged sized transfers in MAXPHYS plus an extra element
* to handle any unaligned residual. The sequencer fetches SG elements
* in cacheline sized chucks, so make the number per-transaction an even
* multiple of 16 which should align us on even the largest of cacheline
* boundaries.
*/
-#define AHC_NSEG (roundup(btoc(MAXPHYS) + 1, 16))
+#define AHC_MAXPHYS (128 * 1024)
+#define AHC_NSEG (roundup(btoc(AHC_MAXPHYS) + 1, 16))
/* This driver supports target mode */
#define AHC_TARGET_MODE 1
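
The segment-count caps introduced by these hunks all follow the same pattern: size the S/G list for the legacy 128K MAXPHYS rather than the new, larger kernel value. The arithmetic can be checked standalone (4K pages assumed; btoc()/roundup() re-derived):

```c
#include <assert.h>

/* Assumed page size; helpers re-derived from kernel definitions. */
#define PAGE_SIZE	4096
#define btoc(x)		(((x) + PAGE_SIZE - 1) / PAGE_SIZE)
#define roundup(x, y)	((((x) + ((y) - 1)) / (y)) * (y))

/* Legacy 128K cap, as in aic7xxx; one S/G entry per page plus one
 * for an unaligned residual, rounded to a multiple of 16 so the
 * sequencer's cacheline-sized fetches stay aligned. */
#define AHC_MAXPHYS	(128 * 1024)
#define AHC_NSEG	(roundup(btoc(AHC_MAXPHYS) + 1, 16))
```

128K over 4K pages is 32 entries, plus one residual gives 33, and rounding to a multiple of 16 yields 48, matching the memory the driver sets aside per transaction.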
diff --git a/sys/dev/amd/amd.h b/sys/dev/amd/amd.h
index d9b8cd209b34..c671f222467d 100644
--- a/sys/dev/amd/amd.h
+++ b/sys/dev/amd/amd.h
@@ -95,7 +95,8 @@ struct amd_sg {
#define AMD_MAX_SYNC_OFFSET 15
#define AMD_TARGET_MAX 7
#define AMD_LUN_MAX 7
-#define AMD_NSEG (btoc(MAXPHYS) + 1)
+#define AMD_MAXPHYS (128 * 1024) /* legacy MAXPHYS */
+#define AMD_NSEG (btoc(AMD_MAXPHYS) + 1)
#define AMD_MAXTRANSFER_SIZE 0xFFFFFF /* restricted by 24 bit counter */
#define MAX_DEVICES 10
#define MAX_TAGS_CMD_QUEUE 256
diff --git a/sys/dev/ata/atapi-cam.c b/sys/dev/ata/atapi-cam.c
index 50aaf7877c77..cf90c49f7c33 100644
--- a/sys/dev/ata/atapi-cam.c
+++ b/sys/dev/ata/atapi-cam.c
@@ -376,7 +376,7 @@ atapi_action(struct cam_sim *sim, union ccb *ccb)
cpi->unit_number = cam_sim_unit(sim);
cpi->bus_id = cam_sim_bus(sim);
cpi->base_transfer_speed = 3300;
- cpi->transport = XPORT_ATA;
+ cpi->transport = XPORT_SPI;
cpi->transport_version = 2;
cpi->protocol = PROTO_SCSI;
cpi->protocol_version = SCSI_REV_2;
@@ -418,6 +418,8 @@ atapi_action(struct cam_sim *sim, union ccb *ccb)
break;
}
}
+ cpi->maxio = softc->ata_ch->dma.max_iosize ?
+ softc->ata_ch->dma.max_iosize : DFLTPHYS;
ccb->ccb_h.status = CAM_REQ_CMP;
xpt_done(ccb);
return;
@@ -456,7 +458,7 @@ atapi_action(struct cam_sim *sim, union ccb *ccb)
struct ccb_trans_settings *cts = &ccb->cts;
cts->protocol = PROTO_SCSI;
cts->protocol_version = SCSI_REV_2;
- cts->transport = XPORT_ATA;
+ cts->transport = XPORT_SPI;
cts->transport_version = XPORT_VERSION_UNSPECIFIED;
cts->proto_specific.valid = 0;
cts->xport_specific.valid = 0;
@@ -666,13 +668,11 @@ action_oom:
xpt_freeze_simq(sim, /*count*/ 1);
ccb_h->status = CAM_REQUEUE_REQ;
xpt_done(ccb);
- mtx_unlock(&softc->state_lock);
return;
action_invalid:
ccb_h->status = CAM_REQ_INVALID;
xpt_done(ccb);
- mtx_unlock(&softc->state_lock);
return;
}
diff --git a/sys/dev/ciss/ciss.c b/sys/dev/ciss/ciss.c
index 93aea725299c..c028905137ad 100644
--- a/sys/dev/ciss/ciss.c
+++ b/sys/dev/ciss/ciss.c
@@ -2976,6 +2976,7 @@ ciss_cam_action(struct cam_sim *sim, union ccb *ccb)
cpi->transport_version = 2;
cpi->protocol = PROTO_SCSI;
cpi->protocol_version = SCSI_REV_2;
+ cpi->maxio = (CISS_MAX_SG_ELEMENTS - 1) * PAGE_SIZE;
ccb->ccb_h.status = CAM_REQ_CMP;
break;
}
diff --git a/sys/dev/ciss/cissvar.h b/sys/dev/ciss/cissvar.h
index a3df3c2d12c2..78c7cc8ede93 100644
--- a/sys/dev/ciss/cissvar.h
+++ b/sys/dev/ciss/cissvar.h
@@ -141,6 +141,9 @@ struct ciss_request
#define CISS_COMMAND_SG_LENGTH ((CISS_COMMAND_ALLOC_SIZE - sizeof(struct ciss_command)) \
/ sizeof(struct ciss_sg_entry))
+/* XXX Prep for increasing max i/o */
+#define CISS_MAX_SG_ELEMENTS 33
+
/*
* Per-logical-drive data.
*/
diff --git a/sys/dev/isp/isp_freebsd.h b/sys/dev/isp/isp_freebsd.h
index b18f0dc9abcc..e8c36ca7a974 100644
--- a/sys/dev/isp/isp_freebsd.h
+++ b/sys/dev/isp/isp_freebsd.h
@@ -561,7 +561,8 @@ void isp_common_dmateardown(ispsoftc_t *, struct ccb_scsiio *, uint32_t);
#endif
/* Should be BUS_SPACE_MAXSIZE, but MAXPHYS is larger than BUS_SPACE_MAXSIZE */
-#define ISP_NSEGS ((MAXPHYS / PAGE_SIZE) + 1)
+#define ISP_MAXPHYS (128 * 1024)
+#define ISP_NSEGS ((ISP_MAXPHYS / PAGE_SIZE) + 1)
/*
* Platform specific inline functions
diff --git a/sys/dev/mfi/mfi.c b/sys/dev/mfi/mfi.c
index aaec669b5081..eb18ffe4ce35 100644
--- a/sys/dev/mfi/mfi.c
+++ b/sys/dev/mfi/mfi.c
@@ -341,7 +341,7 @@ mfi_attach(struct mfi_softc *sc)
status = sc->mfi_read_fw_status(sc);
sc->mfi_max_fw_cmds = status & MFI_FWSTATE_MAXCMD_MASK;
max_fw_sge = (status & MFI_FWSTATE_MAXSGL_MASK) >> 16;
- sc->mfi_max_sge = min(max_fw_sge, ((MAXPHYS / PAGE_SIZE) + 1));
+ sc->mfi_max_sge = min(max_fw_sge, ((MFI_MAXPHYS / PAGE_SIZE) + 1));
/*
* Create the dma tag for data buffers. Used both for block I/O
diff --git a/sys/dev/mfi/mfivar.h b/sys/dev/mfi/mfivar.h
index 2738b73506ab..9ddb62791312 100644
--- a/sys/dev/mfi/mfivar.h
+++ b/sys/dev/mfi/mfivar.h
@@ -379,6 +379,7 @@ mfi_print_sense(struct mfi_softc *sc, void *sense)
MALLOC_DECLARE(M_MFIBUF);
#define MFI_CMD_TIMEOUT 30
+#define MFI_MAXPHYS (128 * 1024)
#ifdef MFI_DEBUG
extern void mfi_print_cmd(struct mfi_command *cm);
diff --git a/sys/dev/mlx/mlx.c b/sys/dev/mlx/mlx.c
index dc6a4baa3cfe..6087216901fd 100644
--- a/sys/dev/mlx/mlx.c
+++ b/sys/dev/mlx/mlx.c
@@ -1979,7 +1979,7 @@ mlx_user_command(struct mlx_softc *sc, struct mlx_usercommand *mu)
* initial contents
*/
if (mu->mu_datasize > 0) {
- if (mu->mu_datasize > MAXPHYS) {
+ if (mu->mu_datasize > MLX_MAXPHYS) {
error = EINVAL;
goto out;
}
diff --git a/sys/dev/mlx/mlxvar.h b/sys/dev/mlx/mlxvar.h
index 345cf9933c22..a09f9075ac32 100644
--- a/sys/dev/mlx/mlxvar.h
+++ b/sys/dev/mlx/mlxvar.h
@@ -47,6 +47,7 @@
* making that fit cleanly without crossing page boundaries requires rounding up
* to the next power of two.
*/
+#define MLX_MAXPHYS (128 * 1024)
#define MLX_NSEG 64
#define MLX_NSLOTS 256 /* max number of command slots */
diff --git a/sys/dev/mpt/mpt.h b/sys/dev/mpt/mpt.h
index 6817c4730cd6..0a093efabd12 100644
--- a/sys/dev/mpt/mpt.h
+++ b/sys/dev/mpt/mpt.h
@@ -986,6 +986,9 @@ mpt_pio_read(struct mpt_softc *mpt, int offset)
/* Max MPT Reply we are willing to accept (must be power of 2) */
#define MPT_REPLY_SIZE 256
+/* Max i/o size, based on legacy MAXPHYS. Can be increased. */
+#define MPT_MAXPHYS (128 * 1024)
+
/*
* Must be less than 16384 in order for target mode to work
*/
diff --git a/sys/dev/mpt/mpt_pci.c b/sys/dev/mpt/mpt_pci.c
index deb905ee37bb..c6c59b7c9ccf 100644
--- a/sys/dev/mpt/mpt_pci.c
+++ b/sys/dev/mpt/mpt_pci.c
@@ -795,9 +795,9 @@ mpt_dma_mem_alloc(struct mpt_softc *mpt)
/*
* XXX: we should say that nsegs is 'unrestricted, but that
* XXX: tickles a horrible bug in the busdma code. Instead,
- * XXX: we'll derive a reasonable segment limit from MAXPHYS
+ * XXX: we'll derive a reasonable segment limit from MPT_MAXPHYS
*/
- nsegs = (MAXPHYS / PAGE_SIZE) + 1;
+ nsegs = (MPT_MAXPHYS / PAGE_SIZE) + 1;
if (mpt_dma_tag_create(mpt, mpt->parent_dmat, 1,
0, BUS_SPACE_MAXADDR, BUS_SPACE_MAXADDR,
NULL, NULL, MAXBSIZE, nsegs, BUS_SPACE_MAXSIZE_32BIT, 0,
diff --git a/sys/dev/trm/trm.h b/sys/dev/trm/trm.h
index 21248835a510..97b318ae8c13 100644
--- a/sys/dev/trm/trm.h
+++ b/sys/dev/trm/trm.h
@@ -94,7 +94,8 @@ typedef struct _SGentry {
#define TRM_MAX_CMD_PER_LUN 32
#define TRM_MAX_SRB_CNT 256
#define TRM_MAX_START_JOB 256
-#define TRM_NSEG (btoc(MAXPHYS) + 1)
+#define TRM_MAXPHYS (128 * 1024)
+#define TRM_NSEG (btoc(TRM_MAXPHYS) + 1)
#define TRM_MAXTRANSFER_SIZE 0xFFFFFF /* restricted by 24 bit counter */
#define PAGELEN 4096
diff --git a/sys/modules/Makefile b/sys/modules/Makefile
index add5a26f013c..a3f708283bfd 100644
--- a/sys/modules/Makefile
+++ b/sys/modules/Makefile
@@ -14,6 +14,7 @@ SUBDIR= ${_3dfx} \
${_agp} \
aha \
${_ahb} \
+ ahci \
${_aic} \
aic7xxx \
aio \
diff --git a/sys/modules/ahci/Makefile b/sys/modules/ahci/Makefile
new file mode 100644
index 000000000000..d6828398372f
--- /dev/null
+++ b/sys/modules/ahci/Makefile
@@ -0,0 +1,8 @@
+# $FreeBSD$
+
+.PATH: ${.CURDIR}/../../dev/ahci
+
+KMOD= ahci
+SRCS= ahci.c ahci.h device_if.h bus_if.h pci_if.h opt_cam.h
+
+.include <bsd.kmod.mk>
diff --git a/sys/modules/cam/Makefile b/sys/modules/cam/Makefile
index 6f88b35c830a..ca5c56a7b1ae 100644
--- a/sys/modules/cam/Makefile
+++ b/sys/modules/cam/Makefile
@@ -2,7 +2,7 @@
S= ${.CURDIR}/../..
-.PATH: $S/cam $S/cam/scsi
+.PATH: $S/cam $S/cam/scsi $S/cam/ata
KMOD= cam
@@ -24,6 +24,10 @@ SRCS+= scsi_sa.c
SRCS+= scsi_ses.c
SRCS+= scsi_sg.c
SRCS+= scsi_targ_bh.c scsi_target.c
+SRCS+= scsi_xpt.c
+SRCS+= ata_all.c
+SRCS+= ata_xpt.c
+SRCS+= ata_da.c
EXPORT_SYMS= YES # XXX evaluate