author      Julian Grajkowski <julianx.grajkowski@intel.com>    2022-07-19 08:15:34 +0000
committer   Mark Johnston <markj@FreeBSD.org>                   2022-07-27 15:12:35 +0000
commit      78ee8d1c4cdad7a56dbf50f1c8ade75531ce620c (patch)
tree        4e96ef389636fdec894275b41201927d26d7677c
parent      f4f56ff43dbd30930f4b018e39ba2b9abf84551f (diff)
download    src-78ee8d1c4cda.tar.gz / src-78ee8d1c4cda.zip
qat: Import a new Intel (R) QAT driver
The QAT in-tree driver is ported from the out-of-tree release
available from 01.org.
The driver exposes a complete cryptography and data compression
API in the kernel and integrates with the Open Crypto Framework.
Details of supported operations, devices, and usage can be found
in the qat(4) manual page and on 01.org.
Patch co-authored by: Krzysztof Zdziarski <krzysztofx.zdziarski@intel.com>
Patch co-authored by: Michal Jaraczewski <michalx.jaraczewski@intel.com>
Patch co-authored by: Michal Gulbicki <michalx.gulbicki@intel.com>
Patch co-authored by: Julian Grajkowski <julianx.grajkowski@intel.com>
Patch co-authored by: Piotr Kasierski <piotrx.kasierski@intel.com>
Patch co-authored by: Adam Czupryna <adamx.czupryna@intel.com>
Patch co-authored by: Konrad Zelazny <konradx.zelazny@intel.com>
Patch co-authored by: Katarzyna Rucinska <katarzynax.kargol@intel.com>
Patch co-authored by: Lukasz Kolodzinski <lukaszx.kolodzinski@intel.com>
Patch co-authored by: Zbigniew Jedlinski <zbigniewx.jedlinski@intel.com>
Reviewed by: markj, jhb (OCF integration)
Reviewed by: debdrup, pauamma (docs)
Sponsored by: Intel Corporation
Differential Revision: https://reviews.freebsd.org/D34632
255 files changed, 99110 insertions, 11 deletions
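Editor's illustration (not part of this commit): the commit message notes that the driver integrates with the Open Crypto Framework, so any in-kernel crypto(9) consumer can have its requests accelerated by qat(4). The sketch below shows, under stated assumptions, how such a consumer might submit an AES-128-CBC encryption request; the names example_encrypt and example_crypto_done, the fixed 128-bit key, and the contiguous-buffer payload are assumptions made for the example, and error handling is reduced to a minimum.

/*
 * Minimal sketch of submitting a symmetric cipher request through
 * crypto(9)/OCF.  A hardware driver such as qat(4) is preferred via
 * CRYPTOCAP_F_HARDWARE when one has registered the algorithm.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>
#include <opencrypto/cryptodev.h>

static int
example_crypto_done(struct cryptop *crp)
{
	/* crp_etype carries the completion status reported by the driver. */
	if (crp->crp_etype != 0)
		printf("crypto request failed: %d\n", crp->crp_etype);
	crypto_freereq(crp);
	return (0);
}

static int
example_encrypt(crypto_session_t *sesp, const uint8_t key[16],
    const uint8_t iv[16], void *buf, int len)
{
	struct crypto_session_params csp;
	struct cryptop *crp;
	int error;

	/* Describe the session: AES-CBC cipher mode, 128-bit key. */
	memset(&csp, 0, sizeof(csp));
	csp.csp_mode = CSP_MODE_CIPHER;
	csp.csp_cipher_alg = CRYPTO_AES_CBC;
	csp.csp_cipher_key = key;
	csp.csp_cipher_klen = 16;
	csp.csp_ivlen = 16;

	/* Ask for a hardware provider; qat(4) qualifies when loaded. */
	error = crypto_newsession(sesp, &csp, CRYPTOCAP_F_HARDWARE);
	if (error != 0)
		return (error);

	/* Build one encryption request over a contiguous kernel buffer. */
	crp = crypto_getreq(*sesp, M_WAITOK);
	crp->crp_op = CRYPTO_OP_ENCRYPT;
	crp->crp_flags = CRYPTO_F_IV_SEPARATE;
	memcpy(crp->crp_iv, iv, 16);
	crypto_use_buf(crp, buf, len);
	crp->crp_payload_start = 0;
	crp->crp_payload_length = len;
	crp->crp_callback = example_crypto_done;

	/* Completion is reported asynchronously through the callback. */
	error = crypto_dispatch(crp);
	if (error != 0)
		crypto_freereq(crp);
	return (error);
}

A real consumer would keep the session in *sesp across many requests and release it with crypto_freesession() only after all outstanding requests have completed; passing CRYPTOCAP_F_SOFTWARE instead selects the software fallback when no accelerator is present.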
diff --git a/share/man/man4/qat.4 b/share/man/man4/qat.4 new file mode 100644 index 000000000000..c6082f873a44 --- /dev/null +++ b/share/man/man4/qat.4 @@ -0,0 +1,127 @@ +.\" SPDX-License-Identifier: BSD-3-Clause +.\" Copyright(c) 2007-2022 Intel Corporation +.\" $FreeBSD$ +.Dd June 30, 2022 +.Dt QAT 4 +.Os +.Sh NAME +.Nm qat +.Nd Intel (R) QuickAssist Technology (QAT) driver +.Sh SYNOPSIS +To load the driver call: +.Pp +.Bl -item -compact +.It +kldload qat +.El +.Pp +In order to load the driver on boot add these lines to +.Xr loader.conf 5 selecting firmware(s) suitable for installed device(s) +.Pp +.Bl -item -compact +.It +qat_200xx_fw_load="YES" +.It +qat_c3xxx_fw_load="YES" +.It +qat_c4xxx_fw_load="YES" +.It +qat_c62x_fw_load="YES" +.It +qat_dh895xcc_fw_load="YES" +.It +qat_load="YES" +.El +.Sh DESCRIPTION +The +.Nm +driver supports cryptography and compression acceleration of the +Intel (R) QuickAssist Technology (QAT) devices. +.Pp +The +.Nm +driver is intended for platforms that contain: +.Bl -bullet -compact +.It +Intel (R) C62x Chipset +.It +Intel (R) Atom C3000 processor product family +.It +Intel (R) QuickAssist Adapter 8960/Intel (R) QuickAssist Adapter 8970 +(formerly known as "Lewis Hill") +.It +Intel (R) Communications Chipset 8925 to 8955 Series +.It +Intel (R) Atom P5300 processor product family +.El +.Pp +The +.Nm +driver supports cryptography and compression acceleration. +A complete API for offloading these operations is exposed in the kernel and may +be used by any other entity directly. +For details of usage and supported operations and algorithms refer to the +following documentation available from +.Lk 01.org : +.Bl -bullet -compact +.It +.Rs +.%A Intel (R) +.%T QuickAssist Technology API Programmer's Guide +.Re +.It +.Rs +.%A Intel (R) +.%T QuickAssist Technology Cryptographic API Reference Manual +.Re +.It +.Rs +.%A Intel (R) +.%T QuickAssist Technology Data Compression API Reference Manual +.Re +.It +.Rs +.%A Intel (R) +.%T QuickAssist Technology Performance Optimization Guide +.Re +.El +.Pp +In addition to exposing complete kernel API for offloading cryptography and +compression operations, the +.Nm +driver also integrates with +.Xr crypto 4 , +allowing offloading supported cryptography operations to Intel (R) QuickAssist +Technology (QAT) devices. +For details of usage and supported operations and algorithms refer to the +documentation mentioned above and +.Sx SEE ALSO +section. +.Sh COMPATIBILITY +The +.Nm +driver replaced previous implementation introduced in +.Fx 13.0 . +Current version, in addition to +.Xr crypto 4 +integration, supports also data compression and exposes a complete API for +offloading data compression and cryptography operations. +.Sh SEE ALSO +.Xr crypto 4 , +.Xr ipsec 4 , +.Xr pci 4 , +.Xr crypto 7 , +.Xr crypto 9 +.Sh HISTORY +This +.Nm +driver was introduced in +.Fx 14.0 . +.Fx 13.0 included a different version of +.Nm +driver. +.Sh AUTHORS +The +.Nm +driver was written by +.An Intel (R) Corporation . diff --git a/sys/contrib/dev/qat/LICENSE b/sys/contrib/dev/qat/LICENSE index 266294fd4275..2d9af4268f0f 100644 --- a/sys/contrib/dev/qat/LICENSE +++ b/sys/contrib/dev/qat/LICENSE @@ -1,11 +1,39 @@ -Copyright (c) 2007-2016 Intel Corporation. -All rights reserved. -Redistribution. 
Redistribution and use in binary form, without modification, are permitted provided that the following conditions are met: +Copyright (c) 2021 Intel Corporation - Redistributions must reproduce the above copyright notice and the following disclaimer in the documentation and/or other materials provided with the distribution. - Neither the name of Intel Corporation nor the names of its suppliers may be used to endorse or promote products derived from this software without specific prior written permission. - No reverse engineering, decompilation, or disassembly of this software is permitted. - -Limited patent license. Intel Corporation grants a world-wide, royalty-free, non-exclusive license under patents it now or hereafter owns or controls to make, have made, use, import, offer to sell and sell ("Utilize") this software, but solely to the extent that any such patent is necessary to Utilize the software alone. The patent license shall not apply to any combinations which include this software. No hardware per se is licensed hereunder. +Redistribution. Redistribution and use in binary form, without +modification, are permitted provided that the following conditions are +met: + +* Redistributions must reproduce the above copyright notice and the + following disclaimer in the documentation and/or other materials + provided with the distribution. +* Neither the name of Intel Corporation nor the names of its suppliers + may be used to endorse or promote products derived from this software + without specific prior written permission. +* No reverse engineering, decompilation, or disassembly of this software + is permitted. + +Limited patent license. Intel Corporation grants a world-wide, +royalty-free, non-exclusive license under patents it now or hereafter +owns or controls to make, have made, use, import, offer to sell and +sell ("Utilize") this software, but solely to the extent that any +such patent is necessary to Utilize the software alone, or in +combination with an operating system licensed under an approved Open +Source license as listed by the Open Source Initiative at +http://opensource.org/licenses. The patent license shall not apply to +any other combinations which include this software. No hardware per +se is licensed hereunder. + +DISCLAIMER. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND +CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, +BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND +FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE +COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS +OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND +ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR +TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE +USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH +DAMAGE. -DISCLAIMER. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/sys/contrib/dev/qat/qat_200xx.bin b/sys/contrib/dev/qat/qat_200xx.bin Binary files differnew file mode 100644 index 000000000000..8d1ba6ffc5f6 --- /dev/null +++ b/sys/contrib/dev/qat/qat_200xx.bin diff --git a/sys/contrib/dev/qat/qat_200xx_mmp.bin b/sys/contrib/dev/qat/qat_200xx_mmp.bin Binary files differnew file mode 100644 index 000000000000..0f1f811eef41 --- /dev/null +++ b/sys/contrib/dev/qat/qat_200xx_mmp.bin diff --git a/sys/contrib/dev/qat/qat_895xcc.bin b/sys/contrib/dev/qat/qat_895xcc.bin Binary files differnew file mode 100644 index 000000000000..a642e1dc73aa --- /dev/null +++ b/sys/contrib/dev/qat/qat_895xcc.bin diff --git a/sys/contrib/dev/qat/qat_895xcc_mmp.bin b/sys/contrib/dev/qat/qat_895xcc_mmp.bin Binary files differnew file mode 100644 index 000000000000..f2c0abe493cd --- /dev/null +++ b/sys/contrib/dev/qat/qat_895xcc_mmp.bin diff --git a/sys/contrib/dev/qat/qat_c3xxx.bin b/sys/contrib/dev/qat/qat_c3xxx.bin Binary files differnew file mode 100644 index 000000000000..8d1ba6ffc5f6 --- /dev/null +++ b/sys/contrib/dev/qat/qat_c3xxx.bin diff --git a/sys/contrib/dev/qat/qat_c3xxx_mmp.bin b/sys/contrib/dev/qat/qat_c3xxx_mmp.bin Binary files differnew file mode 100644 index 000000000000..0f1f811eef41 --- /dev/null +++ b/sys/contrib/dev/qat/qat_c3xxx_mmp.bin diff --git a/sys/contrib/dev/qat/qat_c4xxx.bin b/sys/contrib/dev/qat/qat_c4xxx.bin Binary files differnew file mode 100644 index 000000000000..040e31499911 --- /dev/null +++ b/sys/contrib/dev/qat/qat_c4xxx.bin diff --git a/sys/contrib/dev/qat/qat_c4xxx_mmp.bin b/sys/contrib/dev/qat/qat_c4xxx_mmp.bin Binary files differnew file mode 100644 index 000000000000..5a14ff1a47da --- /dev/null +++ b/sys/contrib/dev/qat/qat_c4xxx_mmp.bin diff --git a/sys/contrib/dev/qat/qat_c62x.bin b/sys/contrib/dev/qat/qat_c62x.bin Binary files differnew file mode 100644 index 000000000000..85cb23892baa --- /dev/null +++ b/sys/contrib/dev/qat/qat_c62x.bin diff --git a/sys/contrib/dev/qat/qat_c62x_mmp.bin b/sys/contrib/dev/qat/qat_c62x_mmp.bin Binary files differnew file mode 100644 index 000000000000..3c334a5d68f0 --- /dev/null +++ b/sys/contrib/dev/qat/qat_c62x_mmp.bin diff --git a/sys/dev/qat/include/adf_cfg_dev_dbg.h b/sys/dev/qat/include/adf_cfg_dev_dbg.h new file mode 100644 index 000000000000..2fc7884c10b2 --- /dev/null +++ b/sys/dev/qat/include/adf_cfg_dev_dbg.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_CFG_DEV_DBG_H_ +#define ADF_CFG_DEV_DBG_H_ + +struct adf_accel_dev; + +int adf_cfg_dev_dbg_add(struct adf_accel_dev *accel_dev); +void adf_cfg_dev_dbg_remove(struct adf_accel_dev *accel_dev); + +#endif /* ADF_CFG_DEV_DBG_H_ */ diff --git a/sys/dev/qat/include/adf_cfg_device.h b/sys/dev/qat/include/adf_cfg_device.h new file mode 100644 index 000000000000..40fb91119f03 --- /dev/null +++ b/sys/dev/qat/include/adf_cfg_device.h @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ 
+/* $FreeBSD$ */ +#ifndef ADF_CFG_DEVICE_H_ +#define ADF_CFG_DEVICE_H_ + +#include "adf_cfg.h" +#include "sal_statistics_strings.h" + +#define ADF_CFG_STATIC_CONF_VER 2 +#define ADF_CFG_STATIC_CONF_CY_ASYM_RING_SIZE 64 +#define ADF_CFG_STATIC_CONF_CY_SYM_RING_SIZE 512 +#define ADF_CFG_STATIC_CONF_DC_INTER_BUF_SIZE 64 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_ENABLED 1 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_DC 1 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_DH 0 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_DRBG 0 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_DSA 0 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_ECC 0 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_KEYGEN 0 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_LN 0 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_PRIME 0 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_RSA 0 +#define ADF_CFG_STATIC_CONF_SAL_STATS_CFG_SYM 1 +#define ADF_CFG_STATIC_CONF_POLL 1 +#define ADF_CFG_STATIC_CONF_IRQ 0 +#define ADF_CFG_STATIC_CONF_AUTO_RESET 0 +#define ADF_CFG_STATIC_CONF_NUM_DC_ACCEL_UNITS 2 +#define ADF_CFG_STATIC_CONF_NUM_INLINE_ACCEL_UNITS 0 +#define ADF_CFG_STATIC_CONF_INST_NUM_DC 2 +#define ADF_CFG_STATIC_CONF_INST_NUM_CY_POLL 2 +#define ADF_CFG_STATIC_CONF_INST_NUM_CY_IRQ 2 + +#define ADF_CFG_FW_STRING_TO_ID(str, acc, id) \ + do { \ + typeof(id) id_ = (id); \ + typeof(str) str_; \ + memcpy(str_, (str), sizeof(str_)); \ + if (!strncmp(str_, \ + ADF_SERVICES_DEFAULT, \ + sizeof(ADF_SERVICES_DEFAULT))) \ + *id_ = ADF_FW_IMAGE_DEFAULT; \ + else if (!strncmp(str_, \ + ADF_SERVICES_CRYPTO, \ + sizeof(ADF_SERVICES_CRYPTO))) \ + *id_ = ADF_FW_IMAGE_CRYPTO; \ + else if (!strncmp(str_, \ + ADF_SERVICES_COMPRESSION, \ + sizeof(ADF_SERVICES_COMPRESSION))) \ + *id_ = ADF_FW_IMAGE_COMPRESSION; \ + else if (!strncmp(str_, \ + ADF_SERVICES_CUSTOM1, \ + sizeof(ADF_SERVICES_CUSTOM1))) \ + *id_ = ADF_FW_IMAGE_CUSTOM1; \ + else { \ + *id_ = ADF_FW_IMAGE_DEFAULT; \ + device_printf(GET_DEV(acc), \ + "Invalid SerivesProfile: %s," \ + "Using DEFAULT image\n", \ + str_); \ + } \ + } while (0) + +int adf_cfg_get_ring_pairs(struct adf_cfg_device *device, + struct adf_cfg_instance *inst, + const char *process_name, + struct adf_accel_dev *accel_dev); + +int adf_cfg_device_init(struct adf_cfg_device *device, + struct adf_accel_dev *accel_dev); + +void adf_cfg_device_clear(struct adf_cfg_device *device, + struct adf_accel_dev *accel_dev); + +#endif diff --git a/sys/dev/qat/include/adf_cnvnr_freq_counters.h b/sys/dev/qat/include/adf_cnvnr_freq_counters.h new file mode 100644 index 000000000000..c9b38679aa4d --- /dev/null +++ b/sys/dev/qat/include/adf_cnvnr_freq_counters.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_CNVNR_CTRS_DBG_H_ +#define ADF_CNVNR_CTRS_DBG_H_ + +struct adf_accel_dev; +int adf_cnvnr_freq_counters_add(struct adf_accel_dev *accel_dev); +void adf_cnvnr_freq_counters_remove(struct adf_accel_dev *accel_dev); + +#endif /* ADF_CNVNR_CTRS_DBG_H_ */ diff --git a/sys/dev/qat/include/adf_dev_err.h b/sys/dev/qat/include/adf_dev_err.h new file mode 100644 index 000000000000..b82f91eafc87 --- /dev/null +++ b/sys/dev/qat/include/adf_dev_err.h @@ -0,0 +1,80 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_DEV_ERR_H_ +#define ADF_DEV_ERR_H_ + +#include <sys/types.h> +#include <dev/pci/pcivar.h> +#include "adf_accel_devices.h" + +#define ADF_ERRSOU0 (0x3A000 + 0x00) +#define ADF_ERRSOU1 (0x3A000 + 0x04) +#define 
ADF_ERRSOU2 (0x3A000 + 0x08) +#define ADF_ERRSOU3 (0x3A000 + 0x0C) +#define ADF_ERRSOU4 (0x3A000 + 0xD0) +#define ADF_ERRSOU5 (0x3A000 + 0xD8) +#define ADF_ERRMSK0 (0x3A000 + 0x10) +#define ADF_ERRMSK1 (0x3A000 + 0x14) +#define ADF_ERRMSK2 (0x3A000 + 0x18) +#define ADF_ERRMSK3 (0x3A000 + 0x1C) +#define ADF_ERRMSK4 (0x3A000 + 0xD4) +#define ADF_ERRMSK5 (0x3A000 + 0xDC) +#define ADF_EMSK3_CPM0_MASK BIT(2) +#define ADF_EMSK3_CPM1_MASK BIT(3) +#define ADF_EMSK5_CPM2_MASK BIT(16) +#define ADF_EMSK5_CPM3_MASK BIT(17) +#define ADF_EMSK5_CPM4_MASK BIT(18) +#define ADF_RICPPINTSTS (0x3A000 + 0x114) +#define ADF_RIERRPUSHID (0x3A000 + 0x118) +#define ADF_RIERRPULLID (0x3A000 + 0x11C) +#define ADF_CPP_CFC_ERR_STATUS (0x30000 + 0xC04) +#define ADF_CPP_CFC_ERR_PPID (0x30000 + 0xC08) +#define ADF_TICPPINTSTS (0x3A400 + 0x13C) +#define ADF_TIERRPUSHID (0x3A400 + 0x140) +#define ADF_TIERRPULLID (0x3A400 + 0x144) +#define ADF_SECRAMUERR (0x3AC00 + 0x04) +#define ADF_SECRAMUERRAD (0x3AC00 + 0x0C) +#define ADF_CPPMEMTGTERR (0x3AC00 + 0x10) +#define ADF_ERRPPID (0x3AC00 + 0x14) +#define ADF_INTSTATSSM(i) ((i)*0x4000 + 0x04) +#define ADF_INTSTATSSM_SHANGERR BIT(13) +#define ADF_PPERR(i) ((i)*0x4000 + 0x08) +#define ADF_PPERRID(i) ((i)*0x4000 + 0x0C) +#define ADF_CERRSSMSH(i) ((i)*0x4000 + 0x10) +#define ADF_UERRSSMSH(i) ((i)*0x4000 + 0x18) +#define ADF_UERRSSMSHAD(i) ((i)*0x4000 + 0x1C) +#define ADF_SLICEHANGSTATUS(i) ((i)*0x4000 + 0x4C) +#define ADF_SLICE_HANG_AUTH0_MASK BIT(0) +#define ADF_SLICE_HANG_AUTH1_MASK BIT(1) +#define ADF_SLICE_HANG_AUTH2_MASK BIT(2) +#define ADF_SLICE_HANG_CPHR0_MASK BIT(4) +#define ADF_SLICE_HANG_CPHR1_MASK BIT(5) +#define ADF_SLICE_HANG_CPHR2_MASK BIT(6) +#define ADF_SLICE_HANG_CMP0_MASK BIT(8) +#define ADF_SLICE_HANG_CMP1_MASK BIT(9) +#define ADF_SLICE_HANG_XLT0_MASK BIT(12) +#define ADF_SLICE_HANG_XLT1_MASK BIT(13) +#define ADF_SLICE_HANG_MMP0_MASK BIT(16) +#define ADF_SLICE_HANG_MMP1_MASK BIT(17) +#define ADF_SLICE_HANG_MMP2_MASK BIT(18) +#define ADF_SLICE_HANG_MMP3_MASK BIT(19) +#define ADF_SLICE_HANG_MMP4_MASK BIT(20) +#define ADF_SSMWDT(i) ((i)*0x4000 + 0x54) +#define ADF_SSMWDTPKE(i) ((i)*0x4000 + 0x58) +#define ADF_SHINTMASKSSM(i) ((i)*0x4000 + 0x1018) +#define ADF_ENABLE_SLICE_HANG 0x000000 +#define ADF_MAX_MMP (5) +#define ADF_MMP_BASE(i) ((i)*0x1000 % 0x3800) +#define ADF_CERRSSMMMP(i, n) ((i)*0x4000 + ADF_MMP_BASE(n) + 0x380) +#define ADF_UERRSSMMMP(i, n) ((i)*0x4000 + ADF_MMP_BASE(n) + 0x388) +#define ADF_UERRSSMMMPAD(i, n) ((i)*0x4000 + ADF_MMP_BASE(n) + 0x38C) + +bool adf_handle_slice_hang(struct adf_accel_dev *accel_dev, + u8 accel_num, + struct resource *csr, + u32 slice_hang_offset); +bool adf_check_slice_hang(struct adf_accel_dev *accel_dev); +void adf_print_err_registers(struct adf_accel_dev *accel_dev); + +#endif diff --git a/sys/dev/qat/include/adf_freebsd_pfvf_ctrs_dbg.h b/sys/dev/qat/include/adf_freebsd_pfvf_ctrs_dbg.h new file mode 100644 index 000000000000..d413279fc000 --- /dev/null +++ b/sys/dev/qat/include/adf_freebsd_pfvf_ctrs_dbg.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_PFVF_CTRS_DBG_H_ +#define ADF_PFVF_CTRS_DBG_H_ + +struct adf_accel_dev; +int adf_pfvf_ctrs_dbg_add(struct adf_accel_dev *accel_dev); + +#endif /* ADF_PFVF_CTRS_DBG_H_ */ diff --git a/sys/dev/qat/include/adf_fw_counters.h b/sys/dev/qat/include/adf_fw_counters.h new file mode 100644 index 000000000000..5fddb72eec33 --- /dev/null +++ b/sys/dev/qat/include/adf_fw_counters.h @@ -0,0 +1,40 
@@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_FW_COUNTERS_H_ +#define ADF_FW_COUNTERS_H_ + +#include <linux/rwsem.h> +#include "adf_accel_devices.h" + +#define FW_COUNTERS_MAX_STR_LEN 64 +#define FW_COUNTERS_MAX_KEY_LEN_IN_BYTES FW_COUNTERS_MAX_STR_LEN +#define FW_COUNTERS_MAX_VAL_LEN_IN_BYTES FW_COUNTERS_MAX_STR_LEN +#define FW_COUNTERS_MAX_SECTION_LEN_IN_BYTES FW_COUNTERS_MAX_STR_LEN +#define ADF_FW_COUNTERS_NO_RESPONSE -1 + +struct adf_fw_counters_val { + char key[FW_COUNTERS_MAX_KEY_LEN_IN_BYTES]; + char val[FW_COUNTERS_MAX_VAL_LEN_IN_BYTES]; + struct list_head list; +}; + +struct adf_fw_counters_section { + char name[FW_COUNTERS_MAX_SECTION_LEN_IN_BYTES]; + struct list_head list; + struct list_head param_head; +}; + +struct adf_fw_counters_data { + struct list_head ae_sec_list; + struct sysctl_oid *debug; + struct rw_semaphore lock; +}; + +int adf_fw_counters_add(struct adf_accel_dev *accel_dev); +void adf_fw_counters_remove(struct adf_accel_dev *accel_dev); +int adf_fw_count_ras_event(struct adf_accel_dev *accel_dev, + u32 *ras_event, + char *aeidstr); + +#endif /* ADF_FW_COUNTERS_H_ */ diff --git a/sys/dev/qat/include/adf_heartbeat.h b/sys/dev/qat/include/adf_heartbeat.h new file mode 100644 index 000000000000..55ca58152017 --- /dev/null +++ b/sys/dev/qat/include/adf_heartbeat.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_HEARTBEAT_H_ +#define ADF_HEARTBEAT_H_ + +#include "adf_cfg_common.h" + +struct adf_accel_dev; + +struct qat_sysctl { + unsigned int hb_sysctlvar; + struct sysctl_oid *oid; +}; + +struct adf_heartbeat { + unsigned int hb_sent_counter; + unsigned int hb_failed_counter; + u64 last_hb_check_time; + enum adf_device_heartbeat_status last_hb_status; + struct qat_sysctl heartbeat; + struct qat_sysctl *heartbeat_sent; + struct qat_sysctl *heartbeat_failed; +}; + +int adf_heartbeat_init(struct adf_accel_dev *accel_dev); +void adf_heartbeat_clean(struct adf_accel_dev *accel_dev); + +int adf_get_hb_timer(struct adf_accel_dev *accel_dev, unsigned int *value); +int adf_get_heartbeat_status(struct adf_accel_dev *accel_dev); +int adf_heartbeat_status(struct adf_accel_dev *accel_dev, + enum adf_device_heartbeat_status *hb_status); +#endif /* ADF_HEARTBEAT_H_ */ diff --git a/sys/dev/qat/include/adf_heartbeat_dbg.h b/sys/dev/qat/include/adf_heartbeat_dbg.h new file mode 100644 index 000000000000..2d63e62398c2 --- /dev/null +++ b/sys/dev/qat/include/adf_heartbeat_dbg.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_HEARTBEAT_DBG_H_ +#define ADF_HEARTBEAT_DBG_H_ + +struct adf_accel_dev; +int adf_heartbeat_dbg_add(struct adf_accel_dev *accel_dev); +int adf_heartbeat_dbg_del(struct adf_accel_dev *accel_dev); + +#endif /* ADF_HEARTBEAT_DBG_H_ */ diff --git a/sys/dev/qat/include/adf_pf2vf_msg.h b/sys/dev/qat/include/adf_pf2vf_msg.h new file mode 100644 index 000000000000..9c8462a8f6b6 --- /dev/null +++ b/sys/dev/qat/include/adf_pf2vf_msg.h @@ -0,0 +1,182 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_PF2VF_MSG_H +#define ADF_PF2VF_MSG_H + +/* + * PF<->VF Messaging + * The PF has an array of 32-bit PF2VF registers, one for each VF. 
The + * PF can access all these registers; each VF can access only the one + * register associated with that particular VF. + * + * The register functionally is split into two parts: + * The bottom half is for PF->VF messages. In particular when the first + * bit of this register (bit 0) gets set an interrupt will be triggered + * in the respective VF. + * The top half is for VF->PF messages. In particular when the first bit + * of this half of register (bit 16) gets set an interrupt will be triggered + * in the PF. + * + * The remaining bits within this register are available to encode messages. + * and implement a collision control mechanism to prevent concurrent use of + * the PF2VF register by both the PF and VF. + * + * 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 + * _______________________________________________ + * | | | | | | | | | | | | | | | | | + * +-----------------------------------------------+ + * \___________________________/ \_________/ ^ ^ + * ^ ^ | | + * | | | VF2PF Int + * | | Message Origin + * | Message Type + * Message-specific Data/Reserved + * + * 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 + * _______________________________________________ + * | | | | | | | | | | | | | | | | | + * +-----------------------------------------------+ + * \___________________________/ \_________/ ^ ^ + * ^ ^ | | + * | | | PF2VF Int + * | | Message Origin + * | Message Type + * Message-specific Data/Reserved + * + * Message Origin (Should always be 1) + * A legacy out-of-tree QAT driver allowed for a set of messages not supported + * by this driver; these had a Msg Origin of 0 and are ignored by this driver. + * + * When a PF or VF attempts to send a message in the lower or upper 16 bits, + * respectively, the other 16 bits are written to first with a defined + * IN_USE_BY pattern as part of a collision control scheme (see adf_iov_putmsg). + */ + +/* VF/PF compatibility version. 
*/ +/* ADF_PFVF_COMPATIBILITY_EXT_CAP: Support for extended capabilities */ +#define ADF_PFVF_COMPATIBILITY_CAPABILITIES 2 +/* ADF_PFVF_COMPATIBILITY_FAST_ACK: In-use pattern cleared by receiver */ +#define ADF_PFVF_COMPATIBILITY_FAST_ACK 3 +#define ADF_PFVF_COMPATIBILITY_RING_TO_SVC_MAP 4 +#define ADF_PFVF_COMPATIBILITY_VERSION 4 /* PF<->VF compat */ + +/* PF->VF messages */ +#define ADF_PF2VF_INT BIT(0) +#define ADF_PF2VF_MSGORIGIN_SYSTEM BIT(1) +#define ADF_PF2VF_MSGTYPE_MASK 0x0000003C +#define ADF_PF2VF_MSGTYPE_SHIFT 2 +#define ADF_PF2VF_MSGTYPE_RESTARTING 0x01 +#define ADF_PF2VF_MSGTYPE_VERSION_RESP 0x02 +#define ADF_PF2VF_MSGTYPE_BLOCK_RESP 0x03 +#define ADF_PF2VF_MSGTYPE_FATAL_ERROR 0x04 +#define ADF_PF2VF_IN_USE_BY_PF 0x6AC20000 +#define ADF_PF2VF_IN_USE_BY_PF_MASK 0xFFFE0000 + +/* PF->VF Version Response */ +#define ADF_PF2VF_VERSION_RESP_VERS_MASK 0x00003FC0 +#define ADF_PF2VF_VERSION_RESP_VERS_SHIFT 6 +#define ADF_PF2VF_VERSION_RESP_RESULT_MASK 0x0000C000 +#define ADF_PF2VF_VERSION_RESP_RESULT_SHIFT 14 +#define ADF_PF2VF_MINORVERSION_SHIFT 6 +#define ADF_PF2VF_MAJORVERSION_SHIFT 10 +#define ADF_PF2VF_VF_COMPATIBLE 1 +#define ADF_PF2VF_VF_INCOMPATIBLE 2 +#define ADF_PF2VF_VF_COMPAT_UNKNOWN 3 + +/* PF->VF Block Request Type */ +#define ADF_VF2PF_MIN_SMALL_MESSAGE_TYPE 0 +#define ADF_VF2PF_MAX_SMALL_MESSAGE_TYPE (ADF_VF2PF_MIN_SMALL_MESSAGE_TYPE + 15) +#define ADF_VF2PF_MIN_MEDIUM_MESSAGE_TYPE (ADF_VF2PF_MAX_SMALL_MESSAGE_TYPE + 1) +#define ADF_VF2PF_MAX_MEDIUM_MESSAGE_TYPE \ + (ADF_VF2PF_MIN_MEDIUM_MESSAGE_TYPE + 7) +#define ADF_VF2PF_MIN_LARGE_MESSAGE_TYPE (ADF_VF2PF_MAX_MEDIUM_MESSAGE_TYPE + 1) +#define ADF_VF2PF_MAX_LARGE_MESSAGE_TYPE (ADF_VF2PF_MIN_LARGE_MESSAGE_TYPE + 3) +#define ADF_VF2PF_SMALL_PAYLOAD_SIZE 30 +#define ADF_VF2PF_MEDIUM_PAYLOAD_SIZE 62 +#define ADF_VF2PF_LARGE_PAYLOAD_SIZE 126 + +#define ADF_VF2PF_MAX_BLOCK_TYPE 3 +#define ADF_VF2PF_BLOCK_REQ_TYPE_SHIFT 22 +#define ADF_VF2PF_LARGE_BLOCK_BYTE_NUM_SHIFT 24 +#define ADF_VF2PF_MEDIUM_BLOCK_BYTE_NUM_SHIFT 25 +#define ADF_VF2PF_SMALL_BLOCK_BYTE_NUM_SHIFT 26 +#define ADF_VF2PF_BLOCK_REQ_CRC_SHIFT 31 +#define ADF_VF2PF_LARGE_BLOCK_BYTE_NUM_MASK 0x7F000000 +#define ADF_VF2PF_MEDIUM_BLOCK_BYTE_NUM_MASK 0x7E000000 +#define ADF_VF2PF_SMALL_BLOCK_BYTE_NUM_MASK 0x7C000000 +#define ADF_VF2PF_LARGE_BLOCK_REQ_TYPE_MASK 0xC00000 +#define ADF_VF2PF_MEDIUM_BLOCK_REQ_TYPE_MASK 0x1C00000 +#define ADF_VF2PF_SMALL_BLOCK_REQ_TYPE_MASK 0x3C00000 + +/* PF->VF Block Response Type */ +#define ADF_PF2VF_BLOCK_RESP_TYPE_DATA 0x0 +#define ADF_PF2VF_BLOCK_RESP_TYPE_CRC 0x1 +#define ADF_PF2VF_BLOCK_RESP_TYPE_ERROR 0x2 +#define ADF_PF2VF_BLOCK_RESP_TYPE_SHIFT 6 +#define ADF_PF2VF_BLOCK_RESP_DATA_SHIFT 8 +#define ADF_PF2VF_BLOCK_RESP_TYPE_MASK 0x000000C0 +#define ADF_PF2VF_BLOCK_RESP_DATA_MASK 0x0000FF00 + +/* PF-VF block message header bytes */ +#define ADF_VF2PF_BLOCK_VERSION_BYTE 0 +#define ADF_VF2PF_BLOCK_LEN_BYTE 1 +#define ADF_VF2PF_BLOCK_DATA 2 + +/* PF->VF Block Error Code */ +#define ADF_PF2VF_INVALID_BLOCK_TYPE 0x0 +#define ADF_PF2VF_INVALID_BYTE_NUM_REQ 0x1 +#define ADF_PF2VF_PAYLOAD_TRUNCATED 0x2 +#define ADF_PF2VF_UNSPECIFIED_ERROR 0x3 + +/* VF->PF messages */ +#define ADF_VF2PF_IN_USE_BY_VF 0x00006AC2 +#define ADF_VF2PF_IN_USE_BY_VF_MASK 0x0000FFFE +#define ADF_VF2PF_INT BIT(16) +#define ADF_VF2PF_MSGORIGIN_SYSTEM BIT(17) +#define ADF_VF2PF_MSGTYPE_MASK 0x003C0000 +#define ADF_VF2PF_MSGTYPE_SHIFT 18 +#define ADF_VF2PF_MSGTYPE_INIT 0x3 +#define ADF_VF2PF_MSGTYPE_SHUTDOWN 0x4 +#define ADF_VF2PF_MSGTYPE_VERSION_REQ 0x5 +#define 
ADF_VF2PF_MSGTYPE_COMPAT_VER_REQ 0x6 +#define ADF_VF2PF_MSGTYPE_GET_LARGE_BLOCK_REQ 0x7 +#define ADF_VF2PF_MSGTYPE_GET_MEDIUM_BLOCK_REQ 0x8 +#define ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ 0x9 +#define ADF_VF2PF_MSGTYPE_NOTIFY 0xa +#define ADF_VF2PF_MSGGENC_RESTARTING_COMPLETE 0x0 + +/* Block message types + * 0..15 - 32 byte message + * 16..23 - 64 byte message + * 24..27 - 128 byte message + * 2 - Get Capability Request message + */ +#define ADF_VF2PF_BLOCK_MSG_CAP_SUMMARY 2 +#define ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ 0x3 + +/* VF->PF Compatible Version Request */ +#define ADF_VF2PF_COMPAT_VER_REQ_SHIFT 22 + +/* How long to wait for far side to acknowledge receipt */ +#define ADF_IOV_MSG_ACK_DELAY_US 5 +#define ADF_IOV_MSG_ACK_EXP_MAX_DELAY_US (5 * 1000) +#define ADF_IOV_MSG_ACK_DELAY_MS 5 +#define ADF_IOV_MSG_ACK_LIN_MAX_DELAY_US (2 * 1000 * 1000) +/* If CSR is busy, how long to delay before retrying */ +#define ADF_IOV_MSG_RETRY_DELAY 5 +#define ADF_IOV_MSG_MAX_RETRIES 10 +/* How long to wait for a response from the other side */ +#define ADF_IOV_MSG_RESP_TIMEOUT 100 +/* How often to retry when there is no response */ +#define ADF_IOV_MSG_RESP_RETRIES 5 + +#define ADF_IOV_RATELIMIT_INTERVAL 8 +#define ADF_IOV_RATELIMIT_BURST 130 + +/* CRC Calculation */ +#define ADF_CRC8_INIT_VALUE 0xFF +/* PF VF message byte shift */ +#define ADF_PFVF_DATA_SHIFT 8 +#define ADF_PFVF_DATA_MASK 0xFF +#endif /* ADF_IOV_MSG_H */ diff --git a/sys/dev/qat/include/adf_ver_dbg.h b/sys/dev/qat/include/adf_ver_dbg.h new file mode 100644 index 000000000000..be4ed24df751 --- /dev/null +++ b/sys/dev/qat/include/adf_ver_dbg.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_VER_DBG_H_ +#define ADF_VER_DBG_H_ + +struct adf_accel_dev; +int adf_ver_dbg_add(struct adf_accel_dev *accel_dev); +void adf_ver_dbg_del(struct adf_accel_dev *accel_dev); + +#endif /* ADF_VER_DBG_H_ */ diff --git a/sys/dev/qat/include/common/adf_accel_devices.h b/sys/dev/qat/include/common/adf_accel_devices.h new file mode 100644 index 000000000000..ad0e74335259 --- /dev/null +++ b/sys/dev/qat/include/common/adf_accel_devices.h @@ -0,0 +1,585 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_ACCEL_DEVICES_H_ +#define ADF_ACCEL_DEVICES_H_ + +#include "qat_freebsd.h" +#include "adf_cfg_common.h" + +#define ADF_CFG_NUM_SERVICES 4 + +#define ADF_DH895XCC_DEVICE_NAME "dh895xcc" +#define ADF_DH895XCCVF_DEVICE_NAME "dh895xccvf" +#define ADF_C62X_DEVICE_NAME "c6xx" +#define ADF_C62XVF_DEVICE_NAME "c6xxvf" +#define ADF_C3XXX_DEVICE_NAME "c3xxx" +#define ADF_C3XXXVF_DEVICE_NAME "c3xxxvf" +#define ADF_200XX_DEVICE_NAME "200xx" +#define ADF_200XXVF_DEVICE_NAME "200xxvf" +#define ADF_C4XXX_DEVICE_NAME "c4xxx" +#define ADF_C4XXXVF_DEVICE_NAME "c4xxxvf" +#define ADF_DH895XCC_PCI_DEVICE_ID 0x435 +#define ADF_DH895XCCIOV_PCI_DEVICE_ID 0x443 +#define ADF_C62X_PCI_DEVICE_ID 0x37c8 +#define ADF_C62XIOV_PCI_DEVICE_ID 0x37c9 +#define ADF_C3XXX_PCI_DEVICE_ID 0x19e2 +#define ADF_C3XXXIOV_PCI_DEVICE_ID 0x19e3 +#define ADF_200XX_PCI_DEVICE_ID 0x18ee +#define ADF_200XXIOV_PCI_DEVICE_ID 0x18ef +#define ADF_D15XX_PCI_DEVICE_ID 0x6f54 +#define ADF_D15XXIOV_PCI_DEVICE_ID 0x6f55 +#define ADF_C4XXX_PCI_DEVICE_ID 0x18a0 +#define ADF_C4XXXIOV_PCI_DEVICE_ID 0x18a1 + +#define IS_QAT_GEN3(ID) ({ (ID == ADF_C4XXX_PCI_DEVICE_ID); }) +#define ADF_VF2PF_SET_SIZE 32 +#define ADF_MAX_VF2PF_SET 4 +#define 
ADF_VF2PF_SET_OFFSET(set_nr) ((set_nr)*ADF_VF2PF_SET_SIZE) +#define ADF_VF2PF_VFNR_TO_SET(vf_nr) ((vf_nr) / ADF_VF2PF_SET_SIZE) +#define ADF_VF2PF_VFNR_TO_MASK(vf_nr) \ + ({ \ + u32 vf_nr_ = (vf_nr); \ + BIT((vf_nr_)-ADF_VF2PF_SET_SIZE *ADF_VF2PF_VFNR_TO_SET( \ + vf_nr_)); \ + }) + +#define ADF_DEVICE_FUSECTL_OFFSET 0x40 +#define ADF_DEVICE_LEGFUSE_OFFSET 0x4C +#define ADF_DEVICE_FUSECTL_MASK 0x80000000 +#define ADF_PCI_MAX_BARS 3 +#define ADF_DEVICE_NAME_LENGTH 32 +#define ADF_ETR_MAX_RINGS_PER_BANK 16 +#define ADF_MAX_MSIX_VECTOR_NAME 16 +#define ADF_DEVICE_NAME_PREFIX "qat_" +#define ADF_STOP_RETRY 50 +#define ADF_NUM_THREADS_PER_AE (8) +#define ADF_AE_ADMIN_THREAD (7) +#define ADF_NUM_PKE_STRAND (2) +#define ADF_AE_STRAND0_THREAD (8) +#define ADF_AE_STRAND1_THREAD (9) +#define ADF_NUM_HB_CNT_PER_AE (ADF_NUM_THREADS_PER_AE + ADF_NUM_PKE_STRAND) +#define ADF_CFG_NUM_SERVICES 4 +#define ADF_SRV_TYPE_BIT_LEN 3 +#define ADF_SRV_TYPE_MASK 0x7 +#define ADF_RINGS_PER_SRV_TYPE 2 +#define ADF_THRD_ABILITY_BIT_LEN 4 +#define ADF_THRD_ABILITY_MASK 0xf +#define ADF_VF_OFFSET 0x8 +#define ADF_MAX_FUNC_PER_DEV 0x7 +#define ADF_PCI_DEV_OFFSET 0x3 + +#define ADF_SRV_TYPE_BIT_LEN 3 +#define ADF_SRV_TYPE_MASK 0x7 + +#define GET_SRV_TYPE(ena_srv_mask, srv) \ + (((ena_srv_mask) >> (ADF_SRV_TYPE_BIT_LEN * (srv))) & ADF_SRV_TYPE_MASK) + +#define ADF_DEFAULT_RING_TO_SRV_MAP \ + (CRYPTO | CRYPTO << ADF_CFG_SERV_RING_PAIR_1_SHIFT | \ + NA << ADF_CFG_SERV_RING_PAIR_2_SHIFT | \ + COMP << ADF_CFG_SERV_RING_PAIR_3_SHIFT) + +enum adf_accel_capabilities { + ADF_ACCEL_CAPABILITIES_NULL = 0, + ADF_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC = 1, + ADF_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC = 2, + ADF_ACCEL_CAPABILITIES_CIPHER = 4, + ADF_ACCEL_CAPABILITIES_AUTHENTICATION = 8, + ADF_ACCEL_CAPABILITIES_COMPRESSION = 32, + ADF_ACCEL_CAPABILITIES_DEPRECATED = 64, + ADF_ACCEL_CAPABILITIES_RANDOM_NUMBER = 128 +}; + +struct adf_bar { + rman_res_t base_addr; + struct resource *virt_addr; + rman_res_t size; +} __packed; + +struct adf_accel_msix { + struct msix_entry *entries; + u32 num_entries; +} __packed; + +struct adf_accel_pci { + device_t pci_dev; + struct adf_accel_msix msix_entries; + struct adf_bar pci_bars[ADF_PCI_MAX_BARS]; + uint8_t revid; + uint8_t sku; + int node; +} __packed; + +enum dev_state { DEV_DOWN = 0, DEV_UP }; + +enum dev_sku_info { + DEV_SKU_1 = 0, + DEV_SKU_2, + DEV_SKU_3, + DEV_SKU_4, + DEV_SKU_VF, + DEV_SKU_1_CY, + DEV_SKU_2_CY, + DEV_SKU_3_CY, + DEV_SKU_UNKNOWN +}; + +static inline const char * +get_sku_info(enum dev_sku_info info) +{ + switch (info) { + case DEV_SKU_1: + return "SKU1"; + case DEV_SKU_1_CY: + return "SKU1CY"; + case DEV_SKU_2: + return "SKU2"; + case DEV_SKU_2_CY: + return "SKU2CY"; + case DEV_SKU_3: + return "SKU3"; + case DEV_SKU_3_CY: + return "SKU3CY"; + case DEV_SKU_4: + return "SKU4"; + case DEV_SKU_VF: + return "SKUVF"; + case DEV_SKU_UNKNOWN: + default: + break; + } + return "Unknown SKU"; +} + +enum adf_accel_unit_services { + ADF_ACCEL_SERVICE_NULL = 0, + ADF_ACCEL_INLINE_CRYPTO = 1, + ADF_ACCEL_CRYPTO = 2, + ADF_ACCEL_COMPRESSION = 4 +}; + +struct adf_ae_info { + u32 num_asym_thd; + u32 num_sym_thd; + u32 num_dc_thd; +} __packed; + +struct adf_accel_unit { + u8 au_mask; + u32 accel_mask; + u64 ae_mask; + u64 comp_ae_mask; + u32 num_ae; + enum adf_accel_unit_services services; +} __packed; + +struct adf_accel_unit_info { + u32 inline_ingress_msk; + u32 inline_egress_msk; + u32 sym_ae_msk; + u32 asym_ae_msk; + u32 dc_ae_msk; + u8 num_cy_au; + u8 num_dc_au; + u8 num_inline_au; + struct 
adf_accel_unit *au; + const struct adf_ae_info *ae_info; +} __packed; + +struct adf_hw_aram_info { + /* Inline Egress mask. "1" = AE is working with egress traffic */ + u32 inline_direction_egress_mask; + /* Inline congestion managmenet profiles set in config file */ + u32 inline_congest_mngt_profile; + /* Initialise CY AE mask, "1" = AE is used for CY operations */ + u32 cy_ae_mask; + /* Initialise DC AE mask, "1" = AE is used for DC operations */ + u32 dc_ae_mask; + /* Number of long words used to define the ARAM regions */ + u32 num_aram_lw_entries; + /* ARAM region definitions */ + u32 mmp_region_size; + u32 mmp_region_offset; + u32 skm_region_size; + u32 skm_region_offset; + /* + * Defines size and offset of compression intermediate buffers stored + * in ARAM (device's on-chip memory). + */ + u32 inter_buff_aram_region_size; + u32 inter_buff_aram_region_offset; + u32 sadb_region_size; + u32 sadb_region_offset; +} __packed; + +struct adf_hw_device_class { + const char *name; + const enum adf_device_type type; + uint32_t instances; +} __packed; + +struct arb_info { + u32 arbiter_offset; + u32 wrk_thd_2_srv_arb_map; + u32 wrk_cfg_offset; +} __packed; + +struct admin_info { + u32 admin_msg_ur; + u32 admin_msg_lr; + u32 mailbox_offset; +} __packed; + +struct adf_cfg_device_data; +struct adf_accel_dev; +struct adf_etr_data; +struct adf_etr_ring_data; + +struct adf_hw_device_data { + struct adf_hw_device_class *dev_class; + uint32_t (*get_accel_mask)(struct adf_accel_dev *accel_dev); + uint32_t (*get_ae_mask)(struct adf_accel_dev *accel_dev); + uint32_t (*get_sram_bar_id)(struct adf_hw_device_data *self); + uint32_t (*get_misc_bar_id)(struct adf_hw_device_data *self); + uint32_t (*get_etr_bar_id)(struct adf_hw_device_data *self); + uint32_t (*get_num_aes)(struct adf_hw_device_data *self); + uint32_t (*get_num_accels)(struct adf_hw_device_data *self); + void (*notify_and_wait_ethernet)(struct adf_accel_dev *accel_dev); + bool (*get_eth_doorbell_msg)(struct adf_accel_dev *accel_dev); + uint32_t (*get_pf2vf_offset)(uint32_t i); + uint32_t (*get_vintmsk_offset)(uint32_t i); + u32 (*get_vintsou_offset)(void); + void (*get_arb_info)(struct arb_info *arb_csrs_info); + void (*get_admin_info)(struct admin_info *admin_csrs_info); + void (*get_errsou_offset)(u32 *errsou3, u32 *errsou5); + uint32_t (*get_num_accel_units)(struct adf_hw_device_data *self); + int (*init_accel_units)(struct adf_accel_dev *accel_dev); + void (*exit_accel_units)(struct adf_accel_dev *accel_dev); + uint32_t (*get_clock_speed)(struct adf_hw_device_data *self); + enum dev_sku_info (*get_sku)(struct adf_hw_device_data *self); + bool (*check_prod_sku)(struct adf_accel_dev *accel_dev); + int (*alloc_irq)(struct adf_accel_dev *accel_dev); + void (*free_irq)(struct adf_accel_dev *accel_dev); + void (*enable_error_correction)(struct adf_accel_dev *accel_dev); + int (*check_uncorrectable_error)(struct adf_accel_dev *accel_dev); + void (*print_err_registers)(struct adf_accel_dev *accel_dev); + void (*disable_error_interrupts)(struct adf_accel_dev *accel_dev); + int (*init_ras)(struct adf_accel_dev *accel_dev); + void (*exit_ras)(struct adf_accel_dev *accel_dev); + void (*disable_arb)(struct adf_accel_dev *accel_dev); + void (*update_ras_errors)(struct adf_accel_dev *accel_dev, int error); + bool (*ras_interrupts)(struct adf_accel_dev *accel_dev, + bool *reset_required); + int (*init_admin_comms)(struct adf_accel_dev *accel_dev); + void (*exit_admin_comms)(struct adf_accel_dev *accel_dev); + int (*send_admin_init)(struct adf_accel_dev 
*accel_dev); + void (*set_asym_rings_mask)(struct adf_accel_dev *accel_dev); + int (*get_ring_to_svc_map)(struct adf_accel_dev *accel_dev, + u16 *ring_to_svc_map); + uint32_t (*get_accel_cap)(struct adf_accel_dev *accel_dev); + int (*init_arb)(struct adf_accel_dev *accel_dev); + void (*exit_arb)(struct adf_accel_dev *accel_dev); + void (*get_arb_mapping)(struct adf_accel_dev *accel_dev, + const uint32_t **cfg); + int (*get_heartbeat_status)(struct adf_accel_dev *accel_dev); + uint32_t (*get_ae_clock)(struct adf_hw_device_data *self); + void (*disable_iov)(struct adf_accel_dev *accel_dev); + void (*configure_iov_threads)(struct adf_accel_dev *accel_dev, + bool enable); + void (*enable_ints)(struct adf_accel_dev *accel_dev); + bool (*check_slice_hang)(struct adf_accel_dev *accel_dev); + int (*set_ssm_wdtimer)(struct adf_accel_dev *accel_dev); + int (*enable_vf2pf_comms)(struct adf_accel_dev *accel_dev); + int (*disable_vf2pf_comms)(struct adf_accel_dev *accel_dev); + void (*reset_device)(struct adf_accel_dev *accel_dev); + void (*reset_hw_units)(struct adf_accel_dev *accel_dev); + int (*measure_clock)(struct adf_accel_dev *accel_dev); + void (*restore_device)(struct adf_accel_dev *accel_dev); + uint32_t (*get_obj_cfg_ae_mask)(struct adf_accel_dev *accel_dev, + enum adf_accel_unit_services services); + int (*add_pke_stats)(struct adf_accel_dev *accel_dev); + void (*remove_pke_stats)(struct adf_accel_dev *accel_dev); + int (*add_misc_error)(struct adf_accel_dev *accel_dev); + int (*count_ras_event)(struct adf_accel_dev *accel_dev, + u32 *ras_event, + char *aeidstr); + void (*remove_misc_error)(struct adf_accel_dev *accel_dev); + int (*configure_accel_units)(struct adf_accel_dev *accel_dev); + uint32_t (*get_objs_num)(struct adf_accel_dev *accel_dev); + const char *(*get_obj_name)(struct adf_accel_dev *accel_dev, + enum adf_accel_unit_services services); + void (*pre_reset)(struct adf_accel_dev *accel_dev); + void (*post_reset)(struct adf_accel_dev *accel_dev); + const char *fw_name; + const char *fw_mmp_name; + bool reset_ack; + uint32_t fuses; + uint32_t accel_capabilities_mask; + uint32_t instance_id; + uint16_t accel_mask; + u32 aerucm_mask; + u32 ae_mask; + u32 service_mask; + uint16_t tx_rings_mask; + uint8_t tx_rx_gap; + uint8_t num_banks; + u8 num_rings_per_bank; + uint8_t num_accel; + uint8_t num_logical_accel; + uint8_t num_engines; + uint8_t min_iov_compat_ver; + int (*get_storage_enabled)(struct adf_accel_dev *accel_dev, + uint32_t *storage_enabled); + u8 query_storage_cap; + u32 clock_frequency; + u8 storage_enable; + u32 extended_dc_capabilities; + int (*config_device)(struct adf_accel_dev *accel_dev); + u16 asym_rings_mask; + int (*get_fw_image_type)(struct adf_accel_dev *accel_dev, + enum adf_cfg_fw_image_type *fw_image_type); + u16 ring_to_svc_map; +} __packed; + +/* helper enum for performing CSR operations */ +enum operation { + AND, + OR, +}; + +/* 32-bit CSR write macro */ +#define ADF_CSR_WR(csr_base, csr_offset, val) \ + bus_write_4(csr_base, csr_offset, val) + +/* 64-bit CSR write macro */ +#ifdef __x86_64__ +#define ADF_CSR_WR64(csr_base, csr_offset, val) \ + bus_write_8(csr_base, csr_offset, val) +#else +static __inline void +adf_csr_wr64(struct resource *csr_base, bus_size_t offset, uint64_t value) +{ + bus_write_4(csr_base, offset, (uint32_t)value); + bus_write_4(csr_base, offset + 4, (uint32_t)(value >> 32)); +} +#define ADF_CSR_WR64(csr_base, csr_offset, val) \ + adf_csr_wr64(csr_base, csr_offset, val) +#endif + +/* 32-bit CSR read macro */ +#define 
ADF_CSR_RD(csr_base, csr_offset) bus_read_4(csr_base, csr_offset) + +/* 64-bit CSR read macro */ +#ifdef __x86_64__ +#define ADF_CSR_RD64(csr_base, csr_offset) bus_read_8(csr_base, csr_offset) +#else +static __inline uint64_t +adf_csr_rd64(struct resource *csr_base, bus_size_t offset) +{ + return (((uint64_t)bus_read_4(csr_base, offset)) | + (((uint64_t)bus_read_4(csr_base, offset + 4)) << 32)); +} +#define ADF_CSR_RD64(csr_base, csr_offset) adf_csr_rd64(csr_base, csr_offset) +#endif + +#define GET_DEV(accel_dev) ((accel_dev)->accel_pci_dev.pci_dev) +#define GET_BARS(accel_dev) ((accel_dev)->accel_pci_dev.pci_bars) +#define GET_HW_DATA(accel_dev) (accel_dev->hw_device) +#define GET_MAX_BANKS(accel_dev) (GET_HW_DATA(accel_dev)->num_banks) +#define GET_DEV_SKU(accel_dev) (accel_dev->accel_pci_dev.sku) +#define GET_NUM_RINGS_PER_BANK(accel_dev) \ + (GET_HW_DATA(accel_dev)->num_rings_per_bank) +#define GET_MAX_ACCELENGINES(accel_dev) (GET_HW_DATA(accel_dev)->num_engines) +#define accel_to_pci_dev(accel_ptr) accel_ptr->accel_pci_dev.pci_dev +#define GET_SRV_TYPE(ena_srv_mask, srv) \ + (((ena_srv_mask) >> (ADF_SRV_TYPE_BIT_LEN * (srv))) & ADF_SRV_TYPE_MASK) +#define SET_ASYM_MASK(asym_mask, srv) \ + ({ \ + typeof(srv) srv_ = (srv); \ + (asym_mask) |= ((1 << (srv_)*ADF_RINGS_PER_SRV_TYPE) | \ + (1 << ((srv_)*ADF_RINGS_PER_SRV_TYPE + 1))); \ + }) + +#define GET_NUM_RINGS_PER_BANK(accel_dev) \ + (GET_HW_DATA(accel_dev)->num_rings_per_bank) +#define GET_MAX_PROCESSES(accel_dev) \ + ({ \ + typeof(accel_dev) dev = (accel_dev); \ + (GET_MAX_BANKS(dev) * (GET_NUM_RINGS_PER_BANK(dev) / 2)); \ + }) +#define GET_DU_TABLE(accel_dev) (accel_dev->du_table) + +static inline void +adf_csr_fetch_and_and(struct resource *csr, size_t offs, unsigned long mask) +{ + unsigned int val = ADF_CSR_RD(csr, offs); + + val &= mask; + ADF_CSR_WR(csr, offs, val); +} + +static inline void +adf_csr_fetch_and_or(struct resource *csr, size_t offs, unsigned long mask) +{ + unsigned int val = ADF_CSR_RD(csr, offs); + + val |= mask; + ADF_CSR_WR(csr, offs, val); +} + +static inline void +adf_csr_fetch_and_update(enum operation op, + struct resource *csr, + size_t offs, + unsigned long mask) +{ + switch (op) { + case AND: + adf_csr_fetch_and_and(csr, offs, mask); + break; + case OR: + adf_csr_fetch_and_or(csr, offs, mask); + break; + } +} + +struct pfvf_stats { + struct dentry *stats_file; + /* Messages put in CSR */ + unsigned int tx; + /* Messages read from CSR */ + unsigned int rx; + /* Interrupt fired but int bit was clear */ + unsigned int spurious; + /* Block messages sent */ + unsigned int blk_tx; + /* Block messages received */ + unsigned int blk_rx; + /* Blocks received with CRC errors */ + unsigned int crc_err; + /* CSR in use by other side */ + unsigned int busy; + /* Receiver did not acknowledge */ + unsigned int no_ack; + /* Collision detected */ + unsigned int collision; + /* Couldn't send a response */ + unsigned int tx_timeout; + /* Didn't receive a response */ + unsigned int rx_timeout; + /* Responses received */ + unsigned int rx_rsp; + /* Messages re-transmitted */ + unsigned int retry; + /* Event put timeout */ + unsigned int event_timeout; +}; + +#define NUM_PFVF_COUNTERS 14 + +void adf_get_admin_info(struct admin_info *admin_csrs_info); +struct adf_admin_comms { + bus_addr_t phy_addr; + bus_addr_t const_tbl_addr; + bus_addr_t aram_map_phys_addr; + bus_addr_t phy_hb_addr; + bus_dmamap_t aram_map; + bus_dmamap_t const_tbl_map; + bus_dmamap_t hb_map; + char *virt_addr; + char *virt_hb_addr; + struct resource 
*mailbox_addr; + struct sx lock; + struct bus_dmamem dma_mem; + struct bus_dmamem dma_hb; +}; + +struct icp_qat_fw_loader_handle; +struct adf_fw_loader_data { + struct icp_qat_fw_loader_handle *fw_loader; + const struct firmware *uof_fw; + const struct firmware *mmp_fw; +}; + +struct adf_accel_vf_info { + struct adf_accel_dev *accel_dev; + struct mutex pf2vf_lock; /* protect CSR access for PF2VF messages */ + u32 vf_nr; + bool init; + u8 compat_ver; + struct pfvf_stats pfvf_counters; +}; + +struct adf_fw_versions { + u8 fw_version_major; + u8 fw_version_minor; + u8 fw_version_patch; + u8 mmp_version_major; + u8 mmp_version_minor; + u8 mmp_version_patch; +}; + +#define ADF_COMPAT_CHECKER_MAX 8 +typedef int (*adf_iov_compat_checker_t)(struct adf_accel_dev *accel_dev, + u8 vf_compat_ver); +struct adf_accel_compat_manager { + u8 num_chker; + adf_iov_compat_checker_t iov_compat_checkers[ADF_COMPAT_CHECKER_MAX]; +}; + +struct adf_heartbeat; +struct adf_accel_dev { + struct adf_hw_aram_info *aram_info; + struct adf_accel_unit_info *au_info; + struct adf_etr_data *transport; + struct adf_hw_device_data *hw_device; + struct adf_cfg_device_data *cfg; + struct adf_fw_loader_data *fw_loader; + struct adf_admin_comms *admin; + struct adf_heartbeat *heartbeat; + struct adf_fw_versions fw_versions; + unsigned int autoreset_on_error; + struct adf_fw_counters_data *fw_counters_data; + struct sysctl_oid *debugfs_ae_config; + struct list_head crypto_list; + atomic_t *ras_counters; + unsigned long status; + atomic_t ref_count; + bus_dma_tag_t dma_tag; + struct sysctl_ctx_list sysctl_ctx; + struct sysctl_oid *ras_correctable; + struct sysctl_oid *ras_uncorrectable; + struct sysctl_oid *ras_fatal; + struct sysctl_oid *ras_reset; + struct sysctl_oid *pke_replay_dbgfile; + struct sysctl_oid *misc_error_dbgfile; + struct list_head list; + struct adf_accel_pci accel_pci_dev; + struct adf_accel_compat_manager *cm; + u8 compat_ver; + union { + struct { + /* vf_info is non-zero when SR-IOV is init'ed */ + struct adf_accel_vf_info *vf_info; + int num_vfs; + } pf; + struct { + struct resource *irq; + void *cookie; + char *irq_name; + struct task pf2vf_bh_tasklet; + struct mutex vf2pf_lock; /* protect CSR access */ + int iov_msg_completion; + uint8_t compatible; + uint8_t pf_version; + u8 pf2vf_block_byte; + u8 pf2vf_block_resp_type; + struct pfvf_stats pfvf_counters; + } vf; + } u1; + bool is_vf; + u32 accel_id; + void *lac_dev; +}; +#endif diff --git a/sys/dev/qat/include/common/adf_cfg.h b/sys/dev/qat/include/common/adf_cfg.h new file mode 100644 index 000000000000..edc4813cb69e --- /dev/null +++ b/sys/dev/qat/include/common/adf_cfg.h @@ -0,0 +1,79 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_CFG_H_ +#define ADF_CFG_H_ + +#include <linux/rwsem.h> +#include "adf_accel_devices.h" +#include "adf_cfg_common.h" +#include "adf_cfg_strings.h" + +struct adf_cfg_key_val { + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + enum adf_cfg_val_type type; + struct list_head list; +}; + +struct adf_cfg_section { + char name[ADF_CFG_MAX_SECTION_LEN_IN_BYTES]; + bool processed; + bool is_derived; + struct list_head list; + struct list_head param_head; +}; + +struct adf_cfg_device_data { + struct adf_cfg_device *dev; + struct list_head sec_list; + struct sysctl_oid *debug; + struct sx lock; +}; + +struct adf_cfg_depot_list { + struct list_head sec_list; +}; + +int adf_cfg_dev_add(struct adf_accel_dev *accel_dev); +void 
adf_cfg_dev_remove(struct adf_accel_dev *accel_dev); +int adf_cfg_depot_restore_all(struct adf_accel_dev *accel_dev, + struct adf_cfg_depot_list *dev_hp_cfg); +int adf_cfg_section_add(struct adf_accel_dev *accel_dev, const char *name); +void adf_cfg_del_all(struct adf_accel_dev *accel_dev); +void adf_cfg_depot_del_all(struct list_head *head); +int adf_cfg_add_key_value_param(struct adf_accel_dev *accel_dev, + const char *section_name, + const char *key, + const void *val, + enum adf_cfg_val_type type); +int adf_cfg_get_param_value(struct adf_accel_dev *accel_dev, + const char *section, + const char *name, + char *value); +int adf_cfg_save_section(struct adf_accel_dev *accel_dev, + const char *name, + struct adf_cfg_section *section); +int adf_cfg_depot_save_all(struct adf_accel_dev *accel_dev, + struct adf_cfg_depot_list *dev_hp_cfg); +struct adf_cfg_section *adf_cfg_sec_find(struct adf_accel_dev *accel_dev, + const char *sec_name); +int adf_cfg_derived_section_add(struct adf_accel_dev *accel_dev, + const char *name); +int adf_cfg_remove_key_param(struct adf_accel_dev *accel_dev, + const char *section_name, + const char *key); +int adf_cfg_setup_irq(struct adf_accel_dev *accel_dev); +void adf_cfg_set_asym_rings_mask(struct adf_accel_dev *accel_dev); +void adf_cfg_gen_dispatch_arbiter(struct adf_accel_dev *accel_dev, + const u32 *thrd_to_arb_map, + u32 *thrd_to_arb_map_gen, + u32 total_engines); +int adf_cfg_get_fw_image_type(struct adf_accel_dev *accel_dev, + enum adf_cfg_fw_image_type *fw_image_type); +int adf_cfg_get_services_enabled(struct adf_accel_dev *accel_dev, + u16 *ring_to_svc_map); +int adf_cfg_restore_section(struct adf_accel_dev *accel_dev, + struct adf_cfg_section *section); +void adf_cfg_keyval_del_all(struct list_head *head); +#endif diff --git a/sys/dev/qat/include/common/adf_cfg_common.h b/sys/dev/qat/include/common/adf_cfg_common.h new file mode 100644 index 000000000000..68fb5e8a98b3 --- /dev/null +++ b/sys/dev/qat/include/common/adf_cfg_common.h @@ -0,0 +1,211 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_CFG_COMMON_H_ +#define ADF_CFG_COMMON_H_ + +#include <sys/types.h> +#include <sys/ioccom.h> +#include <sys/cpuset.h> + +#define ADF_CFG_MAX_STR_LEN 128 +#define ADF_CFG_MAX_KEY_LEN_IN_BYTES ADF_CFG_MAX_STR_LEN +/* + * Max value length increased to 128 to support more length of values. + * like Dc0CoreAffinity = 0, 1, 2,... 
config values to max cores + */ +#define ADF_CFG_MAX_VAL_LEN_IN_BYTES 128 +#define ADF_CFG_MAX_SECTION_LEN_IN_BYTES ADF_CFG_MAX_STR_LEN +#define ADF_CFG_NULL_TERM_SIZE 1 +#define ADF_CFG_BASE_DEC 10 +#define ADF_CFG_BASE_HEX 16 +#define ADF_CFG_ALL_DEVICES 0xFFFE +#define ADF_CFG_NO_DEVICE 0xFFFF +#define ADF_CFG_AFFINITY_WHATEVER 0xFF +#define MAX_DEVICE_NAME_SIZE 32 +#define ADF_MAX_DEVICES (32 * 32) +#define ADF_MAX_ACCELENGINES 12 +#define ADF_CFG_STORAGE_ENABLED 1 +#define ADF_DEVS_ARRAY_SIZE BITS_TO_LONGS(ADF_MAX_DEVICES) +#define ADF_SSM_WDT_PKE_DEFAULT_VALUE 0x3000000 +#define ADF_WDT_TIMER_SYM_COMP_MS 3 +#define ADF_MIN_HB_TIMER_MS 100 +#define ADF_CFG_MAX_NUM_OF_SECTIONS 16 +#define ADF_CFG_MAX_NUM_OF_TOKENS 16 +#define ADF_CFG_MAX_TOKENS_IN_CONFIG 8 +#define ADF_CFG_RESP_POLL 1 +#define ADF_CFG_RESP_EPOLL 2 +#define ADF_CFG_DEF_CY_RING_ASYM_SIZE 64 +#define ADF_CFG_DEF_CY_RING_SYM_SIZE 512 +#define ADF_CFG_DEF_DC_RING_SIZE 512 +#define ADF_CFG_MAX_CORE_NUM 256 +#define ADF_CFG_MAX_TOKENS ADF_CFG_MAX_CORE_NUM +#define ADF_CFG_MAX_TOKEN_LEN 10 +#define ADF_CFG_ACCEL_DEF_COALES 1 +#define ADF_CFG_ACCEL_DEF_COALES_TIMER 10000 +#define ADF_CFG_ACCEL_DEF_COALES_NUM_MSG 0 +#define ADF_CFG_ASYM_SRV_MASK 1 +#define ADF_CFG_SYM_SRV_MASK 2 +#define ADF_CFG_DC_SRV_MASK 8 +#define ADF_CFG_UNKNOWN_SRV_MASK 0 +#define ADF_CFG_DEF_ASYM_MASK 0x03 +#define ADF_CFG_MAX_SERVICES 4 +#define ADF_MAX_SERVICES 3 + +enum adf_svc_type { + ADF_SVC_ASYM = 0, + ADF_SVC_SYM = 1, + ADF_SVC_DC = 2, + ADF_SVC_NONE = 3 +}; + +struct adf_pci_address { + unsigned char bus; + unsigned char dev; + unsigned char func; +} __packed; + +#define ADF_CFG_SERV_RING_PAIR_0_SHIFT 0 +#define ADF_CFG_SERV_RING_PAIR_1_SHIFT 3 +#define ADF_CFG_SERV_RING_PAIR_2_SHIFT 6 +#define ADF_CFG_SERV_RING_PAIR_3_SHIFT 9 + +enum adf_cfg_service_type { NA = 0, CRYPTO, COMP, SYM, ASYM, USED }; + +enum adf_cfg_bundle_type { FREE, KERNEL, USER }; + +enum adf_cfg_val_type { ADF_DEC, ADF_HEX, ADF_STR }; + +enum adf_device_type { + DEV_UNKNOWN = 0, + DEV_DH895XCC, + DEV_DH895XCCVF, + DEV_C62X, + DEV_C62XVF, + DEV_C3XXX, + DEV_C3XXXVF, + DEV_200XX, + DEV_200XXVF, + DEV_C4XXX, + DEV_C4XXXVF +}; + +enum adf_cfg_fw_image_type { + ADF_FW_IMAGE_DEFAULT = 0, + ADF_FW_IMAGE_CRYPTO, + ADF_FW_IMAGE_COMPRESSION, + ADF_FW_IMAGE_CUSTOM1 +}; + +struct adf_dev_status_info { + enum adf_device_type type; + uint16_t accel_id; + uint16_t instance_id; + uint8_t num_ae; + uint8_t num_accel; + uint8_t num_logical_accel; + uint8_t banks_per_accel; + uint8_t state; + uint8_t bus; + uint8_t dev; + uint8_t fun; + int domain; + char name[MAX_DEVICE_NAME_SIZE]; + u8 sku; + u32 node_id; + u32 device_mem_available; + u32 pci_device_id; +}; + +struct adf_cfg_device { + /* contains all the bundles info */ + struct adf_cfg_bundle **bundles; + /* contains all the instances info */ + struct adf_cfg_instance **instances; + int bundle_num; + int instance_index; + char name[ADF_CFG_MAX_STR_LEN]; + int dev_id; + int max_kernel_bundle_nr; + u16 total_num_inst; +}; + +enum adf_accel_serv_type { + ADF_ACCEL_SERV_NA = 0x0, + ADF_ACCEL_SERV_ASYM, + ADF_ACCEL_SERV_SYM, + ADF_ACCEL_SERV_RND, + ADF_ACCEL_SERV_DC +}; + +struct adf_cfg_ring { + u8 mode : 1; + enum adf_accel_serv_type serv_type; + u8 number : 4; +}; + +struct adf_cfg_bundle { + /* Section(s) name this bundle is shared by */ + char **sections; + int max_section; + int section_index; + int number; + enum adf_cfg_bundle_type type; + cpuset_t affinity_mask; + int polling_mode; + int instance_num; + int num_of_rings; + /* contains all 
the info about rings */ + struct adf_cfg_ring **rings; + u16 in_use; +}; + +struct adf_cfg_instance { + enum adf_cfg_service_type stype; + char name[ADF_CFG_MAX_STR_LEN]; + int polling_mode; + cpuset_t affinity_mask; + /* rings within an instance for services */ + int asym_tx; + int asym_rx; + int sym_tx; + int sym_rx; + int dc_tx; + int dc_rx; + int bundle; +}; + +#define ADF_CFG_MAX_CORE_NUM 256 +#define ADF_CFG_MAX_TOKENS_IN_CONFIG 8 +#define ADF_CFG_MAX_TOKEN_LEN 10 +#define ADF_CFG_MAX_TOKENS ADF_CFG_MAX_CORE_NUM +#define ADF_CFG_ACCEL_DEF_COALES 1 +#define ADF_CFG_ACCEL_DEF_COALES_TIMER 10000 +#define ADF_CFG_ACCEL_DEF_COALES_NUM_MSG 0 +#define ADF_CFG_RESP_EPOLL 2 +#define ADF_CFG_SERV_RING_PAIR_1_SHIFT 3 +#define ADF_CFG_SERV_RING_PAIR_2_SHIFT 6 +#define ADF_CFG_SERV_RING_PAIR_3_SHIFT 9 +#define ADF_CFG_RESP_POLL 1 +#define ADF_CFG_ASYM_SRV_MASK 1 +#define ADF_CFG_SYM_SRV_MASK 2 +#define ADF_CFG_DC_SRV_MASK 8 +#define ADF_CFG_UNKNOWN_SRV_MASK 0 +#define ADF_CFG_DEF_ASYM_MASK 0x03 +#define ADF_CFG_MAX_SERVICES 4 + +#define ADF_CFG_HB_DEFAULT_VALUE 500 +#define ADF_CFG_HB_COUNT_THRESHOLD 3 +#define ADF_MIN_HB_TIMER_MS 100 + +enum adf_device_heartbeat_status { + DEV_HB_UNRESPONSIVE = 0, + DEV_HB_ALIVE, + DEV_HB_UNSUPPORTED +}; + +struct adf_dev_heartbeat_status_ctl { + uint16_t device_id; + enum adf_device_heartbeat_status status; +}; +#endif diff --git a/sys/dev/qat/include/common/adf_cfg_strings.h b/sys/dev/qat/include/common/adf_cfg_strings.h new file mode 100644 index 000000000000..2f05dadadc45 --- /dev/null +++ b/sys/dev/qat/include/common/adf_cfg_strings.h @@ -0,0 +1,132 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_CFG_STRINGS_H_ +#define ADF_CFG_STRINGS_H_ + +#define ADF_GENERAL_SEC "GENERAL" +#define ADF_KERNEL_SEC "KERNEL" +#define ADF_ACCEL_SEC "Accelerator" +#define ADF_NUM_CY "NumberCyInstances" +#define ADF_NUM_DC "NumberDcInstances" +#define ADF_RING_SYM_SIZE "NumConcurrentSymRequests" +#define ADF_RING_ASYM_SIZE "NumConcurrentAsymRequests" +#define ADF_RING_DC_SIZE "NumConcurrentRequests" +#define ADF_RING_ASYM_TX "RingAsymTx" +#define ADF_RING_SYM_TX "RingSymTx" +#define ADF_RING_RND_TX "RingNrbgTx" +#define ADF_RING_ASYM_RX "RingAsymRx" +#define ADF_RING_SYM_RX "RingSymRx" +#define ADF_RING_RND_RX "RingNrbgRx" +#define ADF_RING_DC_TX "RingTx" +#define ADF_RING_DC_RX "RingRx" +#define ADF_ETRMGR_BANK "Bank" +#define ADF_RING_BANK_NUM "BankNumber" +#define ADF_CY "Cy" +#define ADF_DC "Dc" +#define ADF_DC_EXTENDED_FEATURES "Device_DcExtendedFeatures" +#define ADF_ETRMGR_COALESCING_ENABLED "InterruptCoalescingEnabled" +#define ADF_ETRMGR_COALESCING_ENABLED_FORMAT \ + ADF_ETRMGR_BANK "%d" ADF_ETRMGR_COALESCING_ENABLED +#define ADF_ETRMGR_COALESCE_TIMER "InterruptCoalescingTimerNs" +#define ADF_ETRMGR_COALESCE_TIMER_FORMAT \ + ADF_ETRMGR_BANK "%d" ADF_ETRMGR_COALESCE_TIMER +#define ADF_ETRMGR_COALESCING_MSG_ENABLED "InterruptCoalescingNumResponses" +#define ADF_ETRMGR_COALESCING_MSG_ENABLED_FORMAT \ + ADF_ETRMGR_BANK "%d" ADF_ETRMGR_COALESCING_MSG_ENABLED +#define ADF_ETRMGR_CORE_AFFINITY "CoreAffinity" +#define ADF_ETRMGR_CORE_AFFINITY_FORMAT \ + ADF_ETRMGR_BANK "%d" ADF_ETRMGR_CORE_AFFINITY +#define ADF_ACCEL_STR "Accelerator%d" +#define ADF_INLINE_SEC "INLINE" +#define ADF_NUM_CY_ACCEL_UNITS "NumCyAccelUnits" +#define ADF_NUM_DC_ACCEL_UNITS "NumDcAccelUnits" +#define ADF_NUM_INLINE_ACCEL_UNITS "NumInlineAccelUnits" +#define ADF_INLINE_INGRESS "InlineIngress" +#define ADF_INLINE_EGRESS 
"InlineEgress" +#define ADF_INLINE_CONGEST_MNGT_PROFILE "InlineCongestionManagmentProfile" +#define ADF_INLINE_IPSEC_ALGO_GROUP "InlineIPsecAlgoGroup" +#define ADF_SERVICE_CY "cy" +#define ADF_SERVICE_SYM "sym" +#define ADF_SERVICE_DC "dc" +#define ADF_CFG_CY "cy" +#define ADF_CFG_DC "dc" +#define ADF_CFG_ASYM "asym" +#define ADF_CFG_SYM "sym" +#define ADF_SERVICE_INLINE "inline" +#define ADF_SERVICES_ENABLED "ServicesEnabled" +#define ADF_SERVICES_SEPARATOR ";" + +#define ADF_DEV_SSM_WDT_BULK "CySymAndDcWatchDogTimer" +#define ADF_DEV_SSM_WDT_PKE "CyAsymWatchDogTimer" +#define ADF_DH895XCC_AE_FW_NAME "icp_qat_ae.uof" +#define ADF_CXXX_AE_FW_NAME "icp_qat_ae.suof" +#define ADF_HEARTBEAT_TIMER "HeartbeatTimer" +#define ADF_MMP_VER_KEY "Firmware_MmpVer" +#define ADF_UOF_VER_KEY "Firmware_UofVer" +#define ADF_HW_REV_ID_KEY "HW_RevId" +#define ADF_STORAGE_FIRMWARE_ENABLED "StorageEnabled" +#define ADF_DEV_MAX_BANKS "Device_Max_Banks" +#define ADF_DEV_CAPABILITIES_MASK "Device_Capabilities_Mask" +#define ADF_DEV_NODE_ID "Device_NodeId" +#define ADF_DEV_PKG_ID "Device_PkgId" +#define ADF_FIRST_USER_BUNDLE "FirstUserBundle" +#define ADF_INTERNAL_USERSPACE_SEC_SUFF "_INT_" +#define ADF_LIMIT_DEV_ACCESS "LimitDevAccess" +#define DEV_LIMIT_CFG_ACCESS_TMPL "_D_L_ACC" +#define ADF_DEV_MAX_RINGS_PER_BANK "Device_Max_Rings_Per_Bank" +#define ADF_NUM_PROCESSES "NumProcesses" +#define ADF_DH895XCC_AE_FW_NAME_COMPRESSION "compression.uof" +#define ADF_DH895XCC_AE_FW_NAME_CRYPTO "crypto.uof" +#define ADF_DH895XCC_AE_FW_NAME_CUSTOM1 "custom1.uof" +#define ADF_CXXX_AE_FW_NAME_COMPRESSION "compression.suof" +#define ADF_CXXX_AE_FW_NAME_CRYPTO "crypto.suof" +#define ADF_CXXX_AE_FW_NAME_CUSTOM1 "custom1.suof" +#define ADF_DC_EXTENDED_FEATURES "Device_DcExtendedFeatures" +#define ADF_PKE_DISABLED "PkeServiceDisabled" +#define ADF_INTER_BUF_SIZE "DcIntermediateBufferSizeInKB" +#define ADF_AUTO_RESET_ON_ERROR "AutoResetOnError" +#define ADF_KERNEL_SAL_SEC "KERNEL_QAT" +#define ADF_CFG_DEF_CY_RING_ASYM_SIZE 64 +#define ADF_CFG_DEF_CY_RING_SYM_SIZE 512 +#define ADF_CFG_DEF_DC_RING_SIZE 512 +#define ADF_NUM_PROCESSES "NumProcesses" +#define ADF_SERVICES_ENABLED "ServicesEnabled" +#define ADF_CFG_CY "cy" +#define ADF_CFG_SYM "sym" +#define ADF_CFG_ASYM "asym" +#define ADF_CFG_DC "dc" +#define ADF_POLL_MODE "IsPolled" +#define ADF_DEV_KPT_ENABLE "KptEnabled" +#define ADF_STORAGE_FIRMWARE_ENABLED "StorageEnabled" +#define ADF_RL_FIRMWARE_ENABLED "RateLimitingEnabled" +#define ADF_SERVICES_PROFILE "ServicesProfile" +#define ADF_SERVICES_DEFAULT "DEFAULT" +#define ADF_SERVICES_CRYPTO "CRYPTO" +#define ADF_SERVICES_COMPRESSION "COMPRESSION" +#define ADF_SERVICES_CUSTOM1 "CUSTOM1" + +#define ADF_DC_RING_SIZE (ADF_DC ADF_RING_DC_SIZE) +#define ADF_CY_RING_SYM_SIZE (ADF_CY ADF_RING_SYM_SIZE) +#define ADF_CY_RING_ASYM_SIZE (ADF_CY ADF_RING_ASYM_SIZE) +#define ADF_CY_CORE_AFFINITY_FORMAT ADF_CY "%d" ADF_ETRMGR_CORE_AFFINITY +#define ADF_DC_CORE_AFFINITY_FORMAT ADF_DC "%d" ADF_ETRMGR_CORE_AFFINITY +#define ADF_CY_BANK_NUM_FORMAT ADF_CY "%d" ADF_RING_BANK_NUM +#define ADF_DC_BANK_NUM_FORMAT ADF_DC "%d" ADF_RING_BANK_NUM +#define ADF_CY_ASYM_TX_FORMAT ADF_CY "%d" ADF_RING_ASYM_TX +#define ADF_CY_SYM_TX_FORMAT ADF_CY "%d" ADF_RING_SYM_TX +#define ADF_CY_ASYM_RX_FORMAT ADF_CY "%d" ADF_RING_ASYM_RX +#define ADF_CY_SYM_RX_FORMAT ADF_CY "%d" ADF_RING_SYM_RX +#define ADF_DC_TX_FORMAT ADF_DC "%d" ADF_RING_DC_TX +#define ADF_DC_RX_FORMAT ADF_DC "%d" ADF_RING_DC_RX +#define ADF_CY_RING_SYM_SIZE_FORMAT ADF_CY "%d" ADF_RING_SYM_SIZE +#define 
ADF_CY_RING_ASYM_SIZE_FORMAT ADF_CY "%d" ADF_RING_ASYM_SIZE +#define ADF_DC_RING_SIZE_FORMAT ADF_DC "%d" ADF_RING_DC_SIZE +#define ADF_CY_NAME_FORMAT ADF_CY "%dName" +#define ADF_DC_NAME_FORMAT ADF_DC "%dName" +#define ADF_CY_POLL_MODE_FORMAT ADF_CY "%d" ADF_POLL_MODE +#define ADF_DC_POLL_MODE_FORMAT ADF_DC "%d" ADF_POLL_MODE +#define ADF_USER_SECTION_NAME_FORMAT "%s_INT_%d" +#define ADF_LIMITED_USER_SECTION_NAME_FORMAT "%s_DEV%d_INT_%d" +#define ADF_CONFIG_VERSION "ConfigVersion" +#endif diff --git a/sys/dev/qat/include/common/adf_cfg_user.h b/sys/dev/qat/include/common/adf_cfg_user.h new file mode 100644 index 000000000000..910b0ea51465 --- /dev/null +++ b/sys/dev/qat/include/common/adf_cfg_user.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_CFG_USER_H_ +#define ADF_CFG_USER_H_ + +#include "adf_cfg_common.h" +#include "adf_cfg_strings.h" + +struct adf_user_cfg_key_val { + char key[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + union { + struct adf_user_cfg_key_val *next; + uint64_t padding3; + }; + enum adf_cfg_val_type type; +}; + +struct adf_user_cfg_section { + char name[ADF_CFG_MAX_SECTION_LEN_IN_BYTES]; + union { + struct adf_user_cfg_key_val *params; + uint64_t padding1; + }; + union { + struct adf_user_cfg_section *next; + uint64_t padding3; + }; +}; + +struct adf_user_cfg_ctl_data { + union { + struct adf_user_cfg_section *config_section; + uint64_t padding; + }; + u32 device_id; +}; + +struct adf_user_reserve_ring { + u32 accel_id; + u32 bank_nr; + u32 ring_mask; +}; + +#endif diff --git a/sys/dev/qat/include/common/adf_common_drv.h b/sys/dev/qat/include/common/adf_common_drv.h new file mode 100644 index 000000000000..3bb35ed55da3 --- /dev/null +++ b/sys/dev/qat/include/common/adf_common_drv.h @@ -0,0 +1,368 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_DRV_H +#define ADF_DRV_H + +#include <dev/pci/pcivar.h> +#include "adf_accel_devices.h" +#include "icp_qat_fw_loader_handle.h" +#include "icp_qat_hal.h" +#include "adf_cfg_user.h" + +#define ADF_MAJOR_VERSION 0 +#define ADF_MINOR_VERSION 6 +#define ADF_BUILD_VERSION 0 +#define ADF_DRV_VERSION \ + __stringify(ADF_MAJOR_VERSION) "." __stringify( \ + ADF_MINOR_VERSION) "." 
__stringify(ADF_BUILD_VERSION) + +#define ADF_STATUS_RESTARTING 0 +#define ADF_STATUS_STARTING 1 +#define ADF_STATUS_CONFIGURED 2 +#define ADF_STATUS_STARTED 3 +#define ADF_STATUS_AE_INITIALISED 4 +#define ADF_STATUS_AE_UCODE_LOADED 5 +#define ADF_STATUS_AE_STARTED 6 +#define ADF_STATUS_PF_RUNNING 7 +#define ADF_STATUS_IRQ_ALLOCATED 8 +#define ADF_PCIE_FLR_ATTEMPT 10 +#define ADF_STATUS_SYSCTL_CTX_INITIALISED 9 + +#define PCI_EXP_AERUCS 0x104 + +/* PMISC BAR upper and lower offsets in PCIe config space */ +#define ADF_PMISC_L_OFFSET 0x18 +#define ADF_PMISC_U_OFFSET 0x1c + +enum adf_dev_reset_mode { ADF_DEV_RESET_ASYNC = 0, ADF_DEV_RESET_SYNC }; + +enum adf_event { + ADF_EVENT_INIT = 0, + ADF_EVENT_START, + ADF_EVENT_STOP, + ADF_EVENT_SHUTDOWN, + ADF_EVENT_RESTARTING, + ADF_EVENT_RESTARTED, + ADF_EVENT_ERROR, +}; + +struct adf_state { + enum adf_event dev_state; + int dev_id; +}; + +struct service_hndl { + int (*event_hld)(struct adf_accel_dev *accel_dev, enum adf_event event); + unsigned long init_status[ADF_DEVS_ARRAY_SIZE]; + unsigned long start_status[ADF_DEVS_ARRAY_SIZE]; + char *name; + struct list_head list; +}; + +static inline int +get_current_node(void) +{ + return PCPU_GET(domain); +} + +int adf_service_register(struct service_hndl *service); +int adf_service_unregister(struct service_hndl *service); + +int adf_dev_init(struct adf_accel_dev *accel_dev); +int adf_dev_start(struct adf_accel_dev *accel_dev); +int adf_dev_stop(struct adf_accel_dev *accel_dev); +void adf_dev_shutdown(struct adf_accel_dev *accel_dev); +int adf_dev_autoreset(struct adf_accel_dev *accel_dev); +int adf_dev_reset(struct adf_accel_dev *accel_dev, + enum adf_dev_reset_mode mode); +int adf_dev_aer_schedule_reset(struct adf_accel_dev *accel_dev, + enum adf_dev_reset_mode mode); +void adf_error_notifier(uintptr_t arg); +int adf_init_fatal_error_wq(void); +void adf_exit_fatal_error_wq(void); +int adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr); +int adf_iov_notify(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr); +void adf_pf2vf_notify_restarting(struct adf_accel_dev *accel_dev); +int adf_notify_fatal_error(struct adf_accel_dev *accel_dev); +void adf_pf2vf_notify_fatal_error(struct adf_accel_dev *accel_dev); +void adf_pf2vf_notify_uncorrectable_error(struct adf_accel_dev *accel_dev); +void adf_pf2vf_notify_heartbeat_error(struct adf_accel_dev *accel_dev); +typedef int (*adf_iov_block_provider)(struct adf_accel_dev *accel_dev, + u8 **buffer, + u8 *length, + u8 *block_version, + u8 compatibility, + u8 byte_num); +int adf_iov_block_provider_register(u8 block_type, + const adf_iov_block_provider provider); +u8 adf_iov_is_block_provider_registered(u8 block_type); +int adf_iov_block_provider_unregister(u8 block_type, + const adf_iov_block_provider provider); +int adf_iov_block_get(struct adf_accel_dev *accel_dev, + u8 block_type, + u8 *block_version, + u8 *buffer, + u8 *length); +u8 adf_pfvf_crc(u8 start_crc, u8 *buf, u8 len); +int adf_iov_init_compat_manager(struct adf_accel_dev *accel_dev, + struct adf_accel_compat_manager **cm); +int adf_iov_shutdown_compat_manager(struct adf_accel_dev *accel_dev, + struct adf_accel_compat_manager **cm); +int adf_iov_register_compat_checker(struct adf_accel_dev *accel_dev, + const adf_iov_compat_checker_t cc); +int adf_iov_unregister_compat_checker(struct adf_accel_dev *accel_dev, + const adf_iov_compat_checker_t cc); +int adf_pf_enable_vf2pf_comms(struct adf_accel_dev *accel_dev); +int adf_pf_disable_vf2pf_comms(struct adf_accel_dev *accel_dev); +int 
adf_enable_vf2pf_comms(struct adf_accel_dev *accel_dev); +int adf_disable_vf2pf_comms(struct adf_accel_dev *accel_dev); +void adf_vf2pf_req_hndl(struct adf_accel_vf_info *vf_info); +void adf_devmgr_update_class_index(struct adf_hw_device_data *hw_data); +void adf_clean_vf_map(bool); +int adf_sysctl_add_fw_versions(struct adf_accel_dev *accel_dev); +int adf_sysctl_remove_fw_versions(struct adf_accel_dev *accel_dev); + +int adf_ctl_dev_register(void); +void adf_ctl_dev_unregister(void); +int adf_pf_vf_capabilities_init(struct adf_accel_dev *accel_dev); +int adf_pf_ext_dc_cap_msg_provider(struct adf_accel_dev *accel_dev, + u8 **buffer, + u8 *length, + u8 *block_version, + u8 compatibility); +int adf_pf_vf_ring_to_svc_init(struct adf_accel_dev *accel_dev); +int adf_pf_ring_to_svc_msg_provider(struct adf_accel_dev *accel_dev, + u8 **buffer, + u8 *length, + u8 *block_version, + u8 compatibility, + u8 byte_num); +int adf_devmgr_add_dev(struct adf_accel_dev *accel_dev, + struct adf_accel_dev *pf); +void adf_devmgr_rm_dev(struct adf_accel_dev *accel_dev, + struct adf_accel_dev *pf); +struct list_head *adf_devmgr_get_head(void); +struct adf_accel_dev *adf_devmgr_get_dev_by_id(uint32_t id); +struct adf_accel_dev *adf_devmgr_get_first(void); +struct adf_accel_dev *adf_devmgr_pci_to_accel_dev(device_t pci_dev); +int adf_devmgr_verify_id(uint32_t *id); +void adf_devmgr_get_num_dev(uint32_t *num); +int adf_devmgr_in_reset(struct adf_accel_dev *accel_dev); +int adf_dev_started(struct adf_accel_dev *accel_dev); +int adf_dev_restarting_notify(struct adf_accel_dev *accel_dev); +int adf_dev_restarting_notify_sync(struct adf_accel_dev *accel_dev); +int adf_dev_restarted_notify(struct adf_accel_dev *accel_dev); +int adf_dev_stop_notify_sync(struct adf_accel_dev *accel_dev); +int adf_ae_init(struct adf_accel_dev *accel_dev); +int adf_ae_shutdown(struct adf_accel_dev *accel_dev); +int adf_ae_fw_load(struct adf_accel_dev *accel_dev); +void adf_ae_fw_release(struct adf_accel_dev *accel_dev); +int adf_ae_start(struct adf_accel_dev *accel_dev); +int adf_ae_stop(struct adf_accel_dev *accel_dev); + +int adf_aer_store_ppaerucm_reg(device_t pdev, + struct adf_hw_device_data *hw_data); + +int adf_enable_aer(struct adf_accel_dev *accel_dev, device_t *adf); +void adf_disable_aer(struct adf_accel_dev *accel_dev); +void adf_reset_sbr(struct adf_accel_dev *accel_dev); +void adf_reset_flr(struct adf_accel_dev *accel_dev); +void adf_dev_pre_reset(struct adf_accel_dev *accel_dev); +void adf_dev_post_reset(struct adf_accel_dev *accel_dev); +void adf_dev_restore(struct adf_accel_dev *accel_dev); +int adf_init_aer(void); +void adf_exit_aer(void); +int adf_put_admin_msg_sync(struct adf_accel_dev *accel_dev, + u32 ae, + void *in, + void *out); +struct icp_qat_fw_init_admin_req; +struct icp_qat_fw_init_admin_resp; +int adf_send_admin(struct adf_accel_dev *accel_dev, + struct icp_qat_fw_init_admin_req *req, + struct icp_qat_fw_init_admin_resp *resp, + u32 ae_mask); +int adf_config_device(struct adf_accel_dev *accel_dev); + +int adf_init_admin_comms(struct adf_accel_dev *accel_dev); +void adf_exit_admin_comms(struct adf_accel_dev *accel_dev); +int adf_send_admin_init(struct adf_accel_dev *accel_dev); +int adf_get_fw_timestamp(struct adf_accel_dev *accel_dev, u64 *timestamp); +int adf_get_fw_pke_stats(struct adf_accel_dev *accel_dev, + u64 *suc_count, + u64 *unsuc_count); +int adf_dev_measure_clock(struct adf_accel_dev *accel_dev, + u32 *frequency, + u32 min, + u32 max); +int adf_clock_debugfs_add(struct adf_accel_dev *accel_dev); +u64 
adf_clock_get_current_time(void); +int adf_init_arb(struct adf_accel_dev *accel_dev); +int adf_init_gen2_arb(struct adf_accel_dev *accel_dev); +void adf_exit_arb(struct adf_accel_dev *accel_dev); +void adf_disable_arb(struct adf_accel_dev *accel_dev); +void adf_update_ring_arb(struct adf_etr_ring_data *ring); +void +adf_enable_ring_arb(void *csr_addr, unsigned int bank_nr, unsigned int mask); +void +adf_disable_ring_arb(void *csr_addr, unsigned int bank_nr, unsigned int mask); +int adf_set_ssm_wdtimer(struct adf_accel_dev *accel_dev); +struct adf_accel_dev *adf_devmgr_get_dev_by_bdf(struct adf_pci_address *addr); +struct adf_accel_dev *adf_devmgr_get_dev_by_pci_bus(u8 bus); +int adf_get_vf_nr(struct adf_pci_address *vf_pci_addr, int *vf_nr); +u32 adf_get_slices_for_svc(struct adf_accel_dev *accel_dev, + enum adf_svc_type svc); +bool adf_is_bdf_equal(struct adf_pci_address *bdf1, + struct adf_pci_address *bdf2); +int adf_is_vf_nr_valid(struct adf_accel_dev *accel_dev, int vf_nr); +void adf_dev_get(struct adf_accel_dev *accel_dev); +void adf_dev_put(struct adf_accel_dev *accel_dev); +int adf_dev_in_use(struct adf_accel_dev *accel_dev); +int adf_init_etr_data(struct adf_accel_dev *accel_dev); +void adf_cleanup_etr_data(struct adf_accel_dev *accel_dev); + +struct qat_crypto_instance *qat_crypto_get_instance_node(int node); +void qat_crypto_put_instance(struct qat_crypto_instance *inst); +void qat_alg_callback(void *resp); +void qat_alg_asym_callback(void *resp); +int qat_algs_register(void); +void qat_algs_unregister(void); +int qat_asym_algs_register(void); +void qat_asym_algs_unregister(void); + +int adf_isr_resource_alloc(struct adf_accel_dev *accel_dev); +void adf_isr_resource_free(struct adf_accel_dev *accel_dev); +int adf_vf_isr_resource_alloc(struct adf_accel_dev *accel_dev); +void adf_vf_isr_resource_free(struct adf_accel_dev *accel_dev); + +int qat_hal_init(struct adf_accel_dev *accel_dev); +void qat_hal_deinit(struct icp_qat_fw_loader_handle *handle); +void qat_hal_start(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int ctx_mask); +void qat_hal_stop(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int ctx_mask); +void qat_hal_reset(struct icp_qat_fw_loader_handle *handle); +int qat_hal_clr_reset(struct icp_qat_fw_loader_handle *handle); +void qat_hal_set_live_ctx(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int ctx_mask); +int qat_hal_check_ae_active(struct icp_qat_fw_loader_handle *handle, + unsigned int ae); +int qat_hal_set_ae_lm_mode(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + enum icp_qat_uof_regtype lm_type, + unsigned char mode); +void qat_hal_set_ae_tindex_mode(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char mode); +void qat_hal_set_ae_scs_mode(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char mode); +int qat_hal_set_ae_ctx_mode(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char mode); +int qat_hal_set_ae_nn_mode(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned char mode); +void qat_hal_set_pc(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int ctx_mask, + unsigned int upc); +void qat_hal_wr_uwords(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int uaddr, + unsigned int words_num, + const uint64_t *uword); +void qat_hal_wr_coalesce_uwords(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int uaddr, + unsigned 
int words_num, + uint64_t *uword); + +void qat_hal_wr_umem(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned int uword_addr, + unsigned int words_num, + unsigned int *data); +int qat_hal_get_ins_num(void); +int qat_hal_batch_wr_lm(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + struct icp_qat_uof_batch_init *lm_init_header); +int qat_hal_init_gpr(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned long ctx_mask, + enum icp_qat_uof_regtype reg_type, + unsigned short reg_num, + unsigned int regdata); +int qat_hal_init_wr_xfer(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned long ctx_mask, + enum icp_qat_uof_regtype reg_type, + unsigned short reg_num, + unsigned int regdata); +int qat_hal_init_rd_xfer(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned long ctx_mask, + enum icp_qat_uof_regtype reg_type, + unsigned short reg_num, + unsigned int regdata); +int qat_hal_init_nn(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned long ctx_mask, + unsigned short reg_num, + unsigned int regdata); +int qat_hal_wr_lm(struct icp_qat_fw_loader_handle *handle, + unsigned char ae, + unsigned short lm_addr, + unsigned int value); +int qat_uclo_wr_all_uimage(struct icp_qat_fw_loader_handle *handle); +void qat_uclo_del_obj(struct icp_qat_fw_loader_handle *handle); +void qat_uclo_del_mof(struct icp_qat_fw_loader_handle *handle); +int qat_uclo_wr_mimage(struct icp_qat_fw_loader_handle *handle, + const void *addr_ptr, + int mem_size); +int qat_uclo_map_obj(struct icp_qat_fw_loader_handle *handle, + const void *addr_ptr, + u32 mem_size, + const char *obj_name); + +void qat_hal_get_scs_neigh_ae(unsigned char ae, unsigned char *ae_neigh); +int qat_uclo_set_cfg_ae_mask(struct icp_qat_fw_loader_handle *handle, + unsigned int cfg_ae_mask); +void adf_enable_pf2vf_interrupts(struct adf_accel_dev *accel_dev); +void adf_disable_pf2vf_interrupts(struct adf_accel_dev *accel_dev); +int adf_init_vf_wq(void); +void adf_exit_vf_wq(void); +void adf_flush_vf_wq(void); +int adf_vf2pf_init(struct adf_accel_dev *accel_dev); +void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev); +static inline int +adf_sriov_configure(device_t *pdev, int numvfs) +{ + return 0; +} + +static inline void +adf_disable_sriov(struct adf_accel_dev *accel_dev) +{ +} + +static inline void +adf_vf2pf_handler(struct adf_accel_vf_info *vf_info) +{ +} + +static inline int +adf_init_pf_wq(void) +{ + return 0; +} + +static inline void +adf_exit_pf_wq(void) +{ +} +#endif diff --git a/sys/dev/qat/include/common/adf_transport.h b/sys/dev/qat/include/common/adf_transport.h new file mode 100644 index 000000000000..def448cc4ab1 --- /dev/null +++ b/sys/dev/qat/include/common/adf_transport.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_TRANSPORT_H +#define ADF_TRANSPORT_H + +#include "adf_accel_devices.h" + +struct adf_etr_ring_data; + +typedef void (*adf_callback_fn)(void *resp_msg); + +int adf_create_ring(struct adf_accel_dev *accel_dev, + const char *section, + u32 bank_num, + u32 num_mgs, + u32 msg_size, + const char *ring_name, + adf_callback_fn callback, + int poll_mode, + struct adf_etr_ring_data **ring_ptr); + +int adf_send_message(struct adf_etr_ring_data *ring, u32 *msg); +void adf_remove_ring(struct adf_etr_ring_data *ring); +int adf_poll_bank(u32 accel_id, u32 bank_num, u32 quota); +int adf_poll_all_banks(u32 accel_id, u32 quota); +#endif /* 
ADF_TRANSPORT_H */ diff --git a/sys/dev/qat/include/common/adf_transport_access_macros.h b/sys/dev/qat/include/common/adf_transport_access_macros.h new file mode 100644 index 000000000000..ad9f0348b5a3 --- /dev/null +++ b/sys/dev/qat/include/common/adf_transport_access_macros.h @@ -0,0 +1,169 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_TRANSPORT_ACCESS_MACROS_H +#define ADF_TRANSPORT_ACCESS_MACROS_H + +#include "adf_accel_devices.h" +#define ADF_BANK_INT_SRC_SEL_MASK_0 0x4444444CUL +#define ADF_BANK_INT_SRC_SEL_MASK_X 0x44444444UL +#define ADF_BANK_INT_FLAG_CLEAR_MASK 0xFFFF +#define ADF_RING_CSR_RING_CONFIG 0x000 +#define ADF_RING_CSR_RING_LBASE 0x040 +#define ADF_RING_CSR_RING_UBASE 0x080 +#define ADF_RING_CSR_RING_HEAD 0x0C0 +#define ADF_RING_CSR_RING_TAIL 0x100 +#define ADF_RING_CSR_E_STAT 0x14C +#define ADF_RING_CSR_INT_FLAG 0x170 +#define ADF_RING_CSR_INT_SRCSEL 0x174 +#define ADF_RING_CSR_INT_SRCSEL_2 0x178 +#define ADF_RING_CSR_INT_COL_EN 0x17C +#define ADF_RING_CSR_INT_COL_CTL 0x180 +#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184 +#define ADF_RING_CSR_INT_COL_CTL_ENABLE 0x80000000 +#define ADF_RING_BUNDLE_SIZE 0x1000 +#define ADF_RING_CONFIG_NEAR_FULL_WM 0x0A +#define ADF_RING_CONFIG_NEAR_EMPTY_WM 0x05 +#define ADF_COALESCING_MIN_TIME 0x1FF +#define ADF_COALESCING_MAX_TIME 0xFFFFF +#define ADF_COALESCING_DEF_TIME 0x27FF +#define ADF_RING_NEAR_WATERMARK_512 0x08 +#define ADF_RING_NEAR_WATERMARK_0 0x00 +#define ADF_RING_EMPTY_SIG 0x7F7F7F7F + +/* Valid internal ring size values */ +#define ADF_RING_SIZE_128 0x01 +#define ADF_RING_SIZE_256 0x02 +#define ADF_RING_SIZE_512 0x03 +#define ADF_RING_SIZE_4K 0x06 +#define ADF_RING_SIZE_16K 0x08 +#define ADF_RING_SIZE_4M 0x10 +#define ADF_MIN_RING_SIZE ADF_RING_SIZE_128 +#define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M +#define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K + +/* Valid internal msg size values */ +#define ADF_MSG_SIZE_32 0x01 +#define ADF_MSG_SIZE_64 0x02 +#define ADF_MSG_SIZE_128 0x04 +#define ADF_MIN_MSG_SIZE ADF_MSG_SIZE_32 +#define ADF_MAX_MSG_SIZE ADF_MSG_SIZE_128 + +/* Size to bytes conversion macros for ring and msg size values */ +#define ADF_MSG_SIZE_TO_BYTES(SIZE) (SIZE << 5) +#define ADF_BYTES_TO_MSG_SIZE(SIZE) (SIZE >> 5) +#define ADF_SIZE_TO_RING_SIZE_IN_BYTES(SIZE) ((1 << (SIZE - 1)) << 7) +#define ADF_RING_SIZE_IN_BYTES_TO_SIZE(SIZE) ((1 << (SIZE - 1)) >> 7) + +/* Set the response quota to a high number */ +#define ADF_NO_RESPONSE_QUOTA 0xFFFFFFFF + +/* Minimum ring bufer size for memory allocation */ +#define ADF_RING_SIZE_BYTES_MIN(SIZE) \ + ((SIZE < ADF_SIZE_TO_RING_SIZE_IN_BYTES(ADF_RING_SIZE_4K)) ? 
\ + ADF_SIZE_TO_RING_SIZE_IN_BYTES(ADF_RING_SIZE_4K) : \ + SIZE) +#define ADF_RING_SIZE_MODULO(SIZE) (SIZE + 0x6) +#define ADF_SIZE_TO_POW(SIZE) \ + ((((SIZE & 0x4) >> 1) | ((SIZE & 0x4) >> 2) | SIZE) & ~0x4) +/* Max outstanding requests */ +#define ADF_MAX_INFLIGHTS(RING_SIZE, MSG_SIZE) \ + ((((1 << (RING_SIZE - 1)) << 3) >> ADF_SIZE_TO_POW(MSG_SIZE)) - 1) +#define BUILD_RING_CONFIG(size) \ + ((ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_FULL_WM) | \ + (ADF_RING_NEAR_WATERMARK_0 << ADF_RING_CONFIG_NEAR_EMPTY_WM) | size) +#define BUILD_RESP_RING_CONFIG(size, watermark_nf, watermark_ne) \ + ((watermark_nf << ADF_RING_CONFIG_NEAR_FULL_WM) | \ + (watermark_ne << ADF_RING_CONFIG_NEAR_EMPTY_WM) | size) +#define BUILD_RING_BASE_ADDR(addr, size) \ + ((addr >> 6) & (0xFFFFFFFFFFFFFFFFULL << size)) +#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \ + ADF_CSR_RD(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + ADF_RING_CSR_RING_HEAD + \ + (ring << 2)) +#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \ + ADF_CSR_RD(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + ADF_RING_CSR_RING_TAIL + \ + (ring << 2)) +#define READ_CSR_E_STAT(csr_base_addr, bank) \ + ADF_CSR_RD(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + ADF_RING_CSR_E_STAT) +#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + ADF_RING_CSR_RING_CONFIG + \ + (ring << 2), \ + value) +#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \ + do { \ + uint32_t l_base = 0, u_base = 0; \ + l_base = (uint32_t)(value & 0xFFFFFFFF); \ + u_base = (uint32_t)((value & 0xFFFFFFFF00000000ULL) >> 32); \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + \ + ADF_RING_CSR_RING_LBASE + (ring << 2), \ + l_base); \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + \ + ADF_RING_CSR_RING_UBASE + (ring << 2), \ + u_base); \ + } while (0) +static inline uint64_t +read_base(struct resource *csr_base_addr, uint32_t bank, uint32_t ring) +{ + uint32_t l_base, u_base; + uint64_t addr; + + l_base = ADF_CSR_RD(csr_base_addr, + (ADF_RING_BUNDLE_SIZE * bank) + + ADF_RING_CSR_RING_LBASE + (ring << 2)); + u_base = ADF_CSR_RD(csr_base_addr, + (ADF_RING_BUNDLE_SIZE * bank) + + ADF_RING_CSR_RING_UBASE + (ring << 2)); + + addr = (uint64_t)l_base & 0x00000000FFFFFFFFULL; + addr |= (uint64_t)u_base << 32 & 0xFFFFFFFF00000000ULL; + + return addr; +} + +#define READ_CSR_RING_BASE(csr_base_addr, bank, ring) \ + read_base(csr_base_addr, bank, ring) +#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + ADF_RING_CSR_RING_HEAD + \ + (ring << 2), \ + value) +#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + ADF_RING_CSR_RING_TAIL + \ + (ring << 2), \ + value) +#define WRITE_CSR_INT_FLAG(csr_base_addr, bank, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * (bank)) + ADF_RING_CSR_INT_FLAG, \ + value) +#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \ + do { \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + \ + ADF_RING_CSR_INT_SRCSEL, \ + ADF_BANK_INT_SRC_SEL_MASK_0); \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + \ + ADF_RING_CSR_INT_SRCSEL_2, \ + ADF_BANK_INT_SRC_SEL_MASK_X); \ + } while (0) +#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + ADF_RING_CSR_INT_COL_EN, \ + value) 
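For orientation, the bundle/ring CSR macros above all compose a flat register offset from a per-bank stride plus a per-ring stride. The following stand-alone sketch is illustrative only: the bank and ring numbers are hypothetical, and only the two constants are taken from the definitions in this header.

/*
 * Illustrative only: recompute the offset used by READ_CSR_RING_HEAD
 * for a hypothetical bank 2, ring 5.  Each bank occupies a 0x1000-byte
 * CSR window and each ring head register is 4 bytes wide.
 */
#include <assert.h>
#include <stdint.h>

#define ADF_RING_BUNDLE_SIZE   0x1000	/* as defined in this header */
#define ADF_RING_CSR_RING_HEAD 0x0C0	/* as defined in this header */

int
main(void)
{
	uint32_t bank = 2, ring = 5;	/* hypothetical instance */
	uint32_t offset = (ADF_RING_BUNDLE_SIZE * bank) +
	    ADF_RING_CSR_RING_HEAD + (ring << 2);

	assert(offset == 0x20D4);	/* 0x2000 + 0xC0 + 0x14 */
	return (0);
}

The same bank-stride/ring-stride pattern applies to the tail, config, and base-address registers, which is why every WRITE_CSR_* macro above shares the (ADF_RING_BUNDLE_SIZE * bank) term.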
+#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + ADF_RING_CSR_INT_COL_CTL, \ + ADF_RING_CSR_INT_COL_CTL_ENABLE | value) +#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE * bank) + \ + ADF_RING_CSR_INT_FLAG_AND_COL, \ + value) +#endif diff --git a/sys/dev/qat/include/common/adf_transport_internal.h b/sys/dev/qat/include/common/adf_transport_internal.h new file mode 100644 index 000000000000..88b99ea44cc4 --- /dev/null +++ b/sys/dev/qat/include/common/adf_transport_internal.h @@ -0,0 +1,58 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef ADF_TRANSPORT_INTRN_H +#define ADF_TRANSPORT_INTRN_H + +#include "adf_transport.h" + +struct adf_etr_ring_debug_entry { + char ring_name[ADF_CFG_MAX_KEY_LEN_IN_BYTES]; + struct sysctl_oid *debug; +}; + +struct adf_etr_ring_data { + void *base_addr; + atomic_t *inflights; + struct mtx lock; /* protects ring data struct */ + adf_callback_fn callback; + struct adf_etr_bank_data *bank; + bus_addr_t dma_addr; + uint16_t head; + uint16_t tail; + uint8_t ring_number; + uint8_t ring_size; + uint8_t msg_size; + uint8_t reserved; + struct adf_etr_ring_debug_entry *ring_debug; + struct bus_dmamem dma_mem; + u32 csr_tail_offset; + u32 max_inflights; +}; + +struct adf_etr_bank_data { + struct adf_etr_ring_data *rings; + struct task resp_handler; + struct resource *csr_addr; + struct adf_accel_dev *accel_dev; + uint32_t irq_coalesc_timer; + uint16_t ring_mask; + uint16_t irq_mask; + struct mtx lock; /* protects bank data struct */ + struct sysctl_oid *bank_debug_dir; + struct sysctl_oid *bank_debug_cfg; + uint32_t bank_number; +}; + +struct adf_etr_data { + struct adf_etr_bank_data *banks; + struct sysctl_oid *debug; +}; + +void adf_response_handler(uintptr_t bank_addr); +int adf_handle_response(struct adf_etr_ring_data *ring, u32 quota); +int adf_bank_debugfs_add(struct adf_etr_bank_data *bank); +void adf_bank_debugfs_rm(struct adf_etr_bank_data *bank); +int adf_ring_debugfs_add(struct adf_etr_ring_data *ring, const char *name); +void adf_ring_debugfs_rm(struct adf_etr_ring_data *ring); +#endif diff --git a/sys/dev/qat/include/common/icp_qat_fw_loader_handle.h b/sys/dev/qat/include/common/icp_qat_fw_loader_handle.h new file mode 100644 index 000000000000..a8afb5a4b377 --- /dev/null +++ b/sys/dev/qat/include/common/icp_qat_fw_loader_handle.h @@ -0,0 +1,53 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef __ICP_QAT_FW_LOADER_HANDLE_H__ +#define __ICP_QAT_FW_LOADER_HANDLE_H__ +#include "icp_qat_uclo.h" + +struct icp_qat_fw_loader_ae_data { + unsigned int state; + unsigned int ustore_size; + unsigned int free_addr; + unsigned int free_size; + unsigned int live_ctx_mask; +}; + +struct icp_qat_fw_loader_hal_handle { + struct icp_qat_fw_loader_ae_data aes[ICP_QAT_UCLO_MAX_AE]; + unsigned int ae_mask; + unsigned int slice_mask; + unsigned int revision_id; + unsigned int ae_max_num; + unsigned int upc_mask; + unsigned int max_ustore; +}; + +struct icp_qat_fw_loader_handle { + struct icp_qat_fw_loader_hal_handle *hal_handle; + struct adf_accel_dev *accel_dev; + device_t pci_dev; + void *obj_handle; + void *sobj_handle; + void *mobj_handle; + bool fw_auth; + unsigned int cfg_ae_mask; + rman_res_t hal_sram_size; + struct resource *hal_sram_addr_v; + unsigned int hal_sram_offset; + struct resource 
*hal_misc_addr_v; + uintptr_t hal_cap_g_ctl_csr_addr_v; + uintptr_t hal_cap_ae_xfer_csr_addr_v; + uintptr_t hal_cap_ae_local_csr_addr_v; + uintptr_t hal_ep_csr_addr_v; +}; + +struct icp_firml_dram_desc { + struct bus_dmamem dram_mem; + + struct resource *dram_base_addr; + void *dram_base_addr_v; + bus_addr_t dram_bus_addr; + u64 dram_size; +}; +#endif diff --git a/sys/dev/qat/include/common/icp_qat_hal.h b/sys/dev/qat/include/common/icp_qat_hal.h new file mode 100644 index 000000000000..3a7475f25333 --- /dev/null +++ b/sys/dev/qat/include/common/icp_qat_hal.h @@ -0,0 +1,196 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef __ICP_QAT_HAL_H +#define __ICP_QAT_HAL_H +#include "adf_accel_devices.h" +#include "icp_qat_fw_loader_handle.h" + +enum hal_global_csr { + MISC_CONTROL = 0x04, + ICP_RESET = 0x0c, + ICP_GLOBAL_CLK_ENABLE = 0x50 +}; + +enum { MISC_CONTROL_C4XXX = 0xAA0, + ICP_RESET_CPP0 = 0x938, + ICP_RESET_CPP1 = 0x93c, + ICP_GLOBAL_CLK_ENABLE_CPP0 = 0x964, + ICP_GLOBAL_CLK_ENABLE_CPP1 = 0x968 }; + +enum hal_ae_csr { + USTORE_ADDRESS = 0x000, + USTORE_DATA_LOWER = 0x004, + USTORE_DATA_UPPER = 0x008, + ALU_OUT = 0x010, + CTX_ARB_CNTL = 0x014, + CTX_ENABLES = 0x018, + CC_ENABLE = 0x01c, + CSR_CTX_POINTER = 0x020, + CTX_STS_INDIRECT = 0x040, + ACTIVE_CTX_STATUS = 0x044, + CTX_SIG_EVENTS_INDIRECT = 0x048, + CTX_SIG_EVENTS_ACTIVE = 0x04c, + CTX_WAKEUP_EVENTS_INDIRECT = 0x050, + LM_ADDR_0_INDIRECT = 0x060, + LM_ADDR_1_INDIRECT = 0x068, + LM_ADDR_2_INDIRECT = 0x0cc, + LM_ADDR_3_INDIRECT = 0x0d4, + INDIRECT_LM_ADDR_0_BYTE_INDEX = 0x0e0, + INDIRECT_LM_ADDR_1_BYTE_INDEX = 0x0e8, + INDIRECT_LM_ADDR_2_BYTE_INDEX = 0x10c, + INDIRECT_LM_ADDR_3_BYTE_INDEX = 0x114, + INDIRECT_T_INDEX = 0x0f8, + INDIRECT_T_INDEX_BYTE_INDEX = 0x0fc, + FUTURE_COUNT_SIGNAL_INDIRECT = 0x078, + TIMESTAMP_LOW = 0x0c0, + TIMESTAMP_HIGH = 0x0c4, + PROFILE_COUNT = 0x144, + SIGNATURE_ENABLE = 0x150, + AE_MISC_CONTROL = 0x160, + LOCAL_CSR_STATUS = 0x180, +}; + +enum fcu_csr { + FCU_CONTROL = 0x0, + FCU_STATUS = 0x4, + FCU_DRAM_ADDR_LO = 0xc, + FCU_DRAM_ADDR_HI = 0x10, + FCU_RAMBASE_ADDR_HI = 0x14, + FCU_RAMBASE_ADDR_LO = 0x18 +}; + +enum fcu_csr_c4xxx { + FCU_CONTROL_C4XXX = 0x0, + FCU_STATUS_C4XXX = 0x4, + FCU_STATUS1_C4XXX = 0xc, + FCU_AE_LOADED_C4XXX = 0x10, + FCU_DRAM_ADDR_LO_C4XXX = 0x14, + FCU_DRAM_ADDR_HI_C4XXX = 0x18, +}; + +enum fcu_cmd { + FCU_CTRL_CMD_NOOP = 0, + FCU_CTRL_CMD_AUTH = 1, + FCU_CTRL_CMD_LOAD = 2, + FCU_CTRL_CMD_START = 3 +}; + +enum fcu_sts { + FCU_STS_NO_STS = 0, + FCU_STS_VERI_DONE = 1, + FCU_STS_LOAD_DONE = 2, + FCU_STS_VERI_FAIL = 3, + FCU_STS_LOAD_FAIL = 4, + FCU_STS_BUSY = 5 +}; +#define UA_ECS (0x1 << 31) +#define ACS_ABO_BITPOS 31 +#define ACS_ACNO 0x7 +#define CE_ENABLE_BITPOS 0x8 +#define CE_LMADDR_0_GLOBAL_BITPOS 16 +#define CE_LMADDR_1_GLOBAL_BITPOS 17 +#define CE_LMADDR_2_GLOBAL_BITPOS 22 +#define CE_LMADDR_3_GLOBAL_BITPOS 23 +#define CE_T_INDEX_GLOBAL_BITPOS 21 +#define CE_NN_MODE_BITPOS 20 +#define CE_REG_PAR_ERR_BITPOS 25 +#define CE_BREAKPOINT_BITPOS 27 +#define CE_CNTL_STORE_PARITY_ERROR_BITPOS 29 +#define CE_INUSE_CONTEXTS_BITPOS 31 +#define CE_NN_MODE (0x1 << CE_NN_MODE_BITPOS) +#define CE_INUSE_CONTEXTS (0x1 << CE_INUSE_CONTEXTS_BITPOS) +#define XCWE_VOLUNTARY (0x1) +#define LCS_STATUS (0x1) +#define MMC_SHARE_CS_BITPOS 2 +#define GLOBAL_CSR 0xA00 +#define FCU_CTRL_AE_POS 0x8 +#define FCU_AUTH_STS_MASK 0x7 +#define FCU_STS_DONE_POS 0x9 +#define FCU_STS_AUTHFWLD_POS 0X8 +#define FCU_LOADED_AE_POS 0x16 +#define 
FW_AUTH_WAIT_PERIOD 10 +#define FW_AUTH_MAX_RETRY 300 +#define FCU_OFFSET 0x8c0 +#define FCU_OFFSET_C4XXX 0x1000 +#define MAX_CPP_NUM 2 +#define AE_CPP_NUM 2 +#define AES_PER_CPP 16 +#define SLICES_PER_CPP 6 +#define ICP_QAT_AE_OFFSET 0x20000 +#define ICP_QAT_AE_OFFSET_C4XXX 0x40000 +#define ICP_QAT_CAP_OFFSET (ICP_QAT_AE_OFFSET + 0x10000) +#define ICP_QAT_CAP_OFFSET_C4XXX 0x70000 +#define LOCAL_TO_XFER_REG_OFFSET 0x800 +#define ICP_QAT_EP_OFFSET 0x3a000 +#define ICP_QAT_EP_OFFSET_C4XXX 0x60000 +#define MEM_CFG_ERR_BIT 0x20 + +#define CAP_CSR_ADDR(csr) (csr + handle->hal_cap_g_ctl_csr_addr_v) +#define SET_CAP_CSR(handle, csr, val) \ + ADF_CSR_WR(handle->hal_misc_addr_v, CAP_CSR_ADDR(csr), val) +#define GET_CAP_CSR(handle, csr) \ + ADF_CSR_RD(handle->hal_misc_addr_v, CAP_CSR_ADDR(csr)) +#define SET_GLB_CSR(handle, csr, val) \ + ({ \ + typeof(handle) handle_ = (handle); \ + typeof(csr) csr_ = (csr); \ + typeof(val) val_ = (val); \ + (IS_QAT_GEN3(pci_get_device(GET_DEV(handle_->accel_dev)))) ? \ + SET_CAP_CSR(handle_, (csr_), (val_)) : \ + SET_CAP_CSR(handle_, csr_ + GLOBAL_CSR, val_); \ + }) +#define GET_GLB_CSR(handle, csr) \ + ({ \ + typeof(handle) handle_ = (handle); \ + typeof(csr) csr_ = (csr); \ + (IS_QAT_GEN3(pci_get_device(GET_DEV(handle_->accel_dev)))) ? \ + (GET_CAP_CSR(handle_, (csr_))) : \ + (GET_CAP_CSR(handle_, (GLOBAL_CSR + (csr_)))); \ + }) +#define SET_FCU_CSR(handle, csr, val) \ + ({ \ + typeof(handle) handle_ = (handle); \ + typeof(csr) csr_ = (csr); \ + typeof(val) val_ = (val); \ + (IS_QAT_GEN3(pci_get_device(GET_DEV(handle_->accel_dev)))) ? \ + SET_CAP_CSR(handle_, \ + ((csr_) + FCU_OFFSET_C4XXX), \ + (val_)) : \ + SET_CAP_CSR(handle_, ((csr_) + FCU_OFFSET), (val_)); \ + }) +#define GET_FCU_CSR(handle, csr) \ + ({ \ + typeof(handle) handle_ = (handle); \ + typeof(csr) csr_ = (csr); \ + (IS_QAT_GEN3(pci_get_device(GET_DEV(handle_->accel_dev)))) ? \ + GET_CAP_CSR(handle_, (FCU_OFFSET_C4XXX + (csr_))) : \ + GET_CAP_CSR(handle_, (FCU_OFFSET + (csr_))); \ + }) +#define AE_CSR(handle, ae) \ + ((handle)->hal_cap_ae_local_csr_addr_v + ((ae) << 12)) +#define AE_CSR_ADDR(handle, ae, csr) (AE_CSR(handle, ae) + (0x3ff & (csr))) +#define SET_AE_CSR(handle, ae, csr, val) \ + ADF_CSR_WR(handle->hal_misc_addr_v, AE_CSR_ADDR(handle, ae, csr), val) +#define GET_AE_CSR(handle, ae, csr) \ + ADF_CSR_RD(handle->hal_misc_addr_v, AE_CSR_ADDR(handle, ae, csr)) +#define AE_XFER(handle, ae) \ + ((handle)->hal_cap_ae_xfer_csr_addr_v + ((ae) << 12)) +#define AE_XFER_ADDR(handle, ae, reg) \ + (AE_XFER(handle, ae) + (((reg)&0xff) << 2)) +#define SET_AE_XFER(handle, ae, reg, val) \ + ADF_CSR_WR(handle->hal_misc_addr_v, AE_XFER_ADDR(handle, ae, reg), val) +#define SRAM_WRITE(handle, addr, val) \ + ADF_CSR_WR((handle)->hal_sram_addr_v, addr, val) +#define GET_CSR_OFFSET(device_id, cap_offset_, ae_offset_, ep_offset_) \ + ({ \ + int gen3 = IS_QAT_GEN3(device_id); \ + cap_offset_ = \ + (gen3 ? ICP_QAT_CAP_OFFSET_C4XXX : ICP_QAT_CAP_OFFSET); \ + ae_offset_ = \ + (gen3 ? ICP_QAT_AE_OFFSET_C4XXX : ICP_QAT_AE_OFFSET); \ + ep_offset_ = \ + (gen3 ? 
ICP_QAT_EP_OFFSET_C4XXX : ICP_QAT_EP_OFFSET); \ + }) + +#endif diff --git a/sys/dev/qat/include/common/icp_qat_uclo.h b/sys/dev/qat/include/common/icp_qat_uclo.h new file mode 100644 index 000000000000..21a1c2fc8ace --- /dev/null +++ b/sys/dev/qat/include/common/icp_qat_uclo.h @@ -0,0 +1,558 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef __ICP_QAT_UCLO_H__ +#define __ICP_QAT_UCLO_H__ + +#define ICP_QAT_AC_895XCC_DEV_TYPE 0x00400000 +#define ICP_QAT_AC_C62X_DEV_TYPE 0x01000000 +#define ICP_QAT_AC_C3XXX_DEV_TYPE 0x02000000 +#define ICP_QAT_AC_200XX_DEV_TYPE 0x02000000 +#define ICP_QAT_AC_C4XXX_DEV_TYPE 0x04000000 +#define ICP_QAT_UCLO_MAX_AE 32 +#define ICP_QAT_UCLO_MAX_CTX 8 +#define ICP_QAT_UCLO_MAX_CPPNUM 2 +#define ICP_QAT_UCLO_MAX_UIMAGE (ICP_QAT_UCLO_MAX_AE * ICP_QAT_UCLO_MAX_CTX) +#define ICP_QAT_UCLO_MAX_USTORE 0x4000 +#define ICP_QAT_UCLO_MAX_XFER_REG 128 +#define ICP_QAT_UCLO_MAX_GPR_REG 128 +#define ICP_QAT_UCLO_MAX_LMEM_REG 1024 +#define ICP_QAT_UCLO_AE_ALL_CTX 0xff +#define ICP_QAT_UOF_OBJID_LEN 8 +#define ICP_QAT_UOF_FID 0xc6c2 +#define ICP_QAT_UOF_MAJVER 0x4 +#define ICP_QAT_UOF_MINVER 0x11 +#define ICP_QAT_UOF_OBJS "UOF_OBJS" +#define ICP_QAT_UOF_STRT "UOF_STRT" +#define ICP_QAT_UOF_IMAG "UOF_IMAG" +#define ICP_QAT_UOF_IMEM "UOF_IMEM" +#define ICP_QAT_UOF_LOCAL_SCOPE 1 +#define ICP_QAT_UOF_INIT_EXPR 0 +#define ICP_QAT_UOF_INIT_REG 1 +#define ICP_QAT_UOF_INIT_REG_CTX 2 +#define ICP_QAT_UOF_INIT_EXPR_ENDIAN_SWAP 3 +#define ICP_QAT_SUOF_OBJ_ID_LEN 8 +#define ICP_QAT_SUOF_FID 0x53554f46 +#define ICP_QAT_SUOF_MAJVER 0x0 +#define ICP_QAT_SUOF_MINVER 0x1 +#define ICP_QAT_SUOF_OBJ_NAME_LEN 128 +#define ICP_QAT_MOF_OBJ_ID_LEN 8 +#define ICP_QAT_MOF_OBJ_CHUNKID_LEN 8 +#define ICP_QAT_MOF_FID 0x00666f6d +#define ICP_QAT_MOF_MAJVER 0x0 +#define ICP_QAT_MOF_MINVER 0x1 +#define ICP_QAT_MOF_SYM_OBJS "SYM_OBJS" +#define ICP_QAT_SUOF_OBJS "SUF_OBJS" +#define ICP_QAT_SUOF_IMAG "SUF_IMAG" +#define ICP_QAT_SIMG_AE_INIT_SEQ_LEN (50 * sizeof(unsigned long long)) +#define ICP_QAT_SIMG_AE_INSTS_LEN (0x4000 * sizeof(unsigned long long)) +#define ICP_QAT_CSS_FWSK_MODULUS_LEN 256 +#define ICP_QAT_CSS_FWSK_EXPONENT_LEN 4 +#define ICP_QAT_CSS_FWSK_PAD_LEN 252 +#define ICP_QAT_CSS_FWSK_PUB_LEN \ + (ICP_QAT_CSS_FWSK_MODULUS_LEN + ICP_QAT_CSS_FWSK_EXPONENT_LEN + \ + ICP_QAT_CSS_FWSK_PAD_LEN) +#define ICP_QAT_CSS_SIGNATURE_LEN 256 +#define ICP_QAT_CSS_AE_IMG_LEN \ + (sizeof(struct icp_qat_simg_ae_mode) + ICP_QAT_SIMG_AE_INIT_SEQ_LEN + \ + ICP_QAT_SIMG_AE_INSTS_LEN) +#define ICP_QAT_CSS_AE_SIMG_LEN \ + (sizeof(struct icp_qat_css_hdr) + ICP_QAT_CSS_FWSK_PUB_LEN + \ + ICP_QAT_CSS_SIGNATURE_LEN + ICP_QAT_CSS_AE_IMG_LEN) +#define ICP_QAT_AE_IMG_OFFSET \ + (sizeof(struct icp_qat_css_hdr) + ICP_QAT_CSS_FWSK_MODULUS_LEN + \ + ICP_QAT_CSS_FWSK_EXPONENT_LEN + ICP_QAT_CSS_SIGNATURE_LEN) +#define ICP_QAT_CSS_MAX_IMAGE_LEN 0x40000 + +#define ICP_QAT_CTX_MODE(ae_mode) ((ae_mode)&0xf) +#define ICP_QAT_NN_MODE(ae_mode) (((ae_mode) >> 0x4) & 0xf) +#define ICP_QAT_SHARED_USTORE_MODE(ae_mode) (((ae_mode) >> 0xb) & 0x1) +#define RELOADABLE_CTX_SHARED_MODE(ae_mode) (((ae_mode) >> 0xc) & 0x1) + +#define ICP_QAT_LOC_MEM0_MODE(ae_mode) (((ae_mode) >> 0x8) & 0x1) +#define ICP_QAT_LOC_MEM1_MODE(ae_mode) (((ae_mode) >> 0x9) & 0x1) +#define ICP_QAT_LOC_MEM2_MODE(ae_mode) (((ae_mode) >> 0x6) & 0x1) +#define ICP_QAT_LOC_MEM3_MODE(ae_mode) (((ae_mode) >> 0x7) & 0x1) +#define ICP_QAT_LOC_TINDEX_MODE(ae_mode) (((ae_mode) >> 0xe) & 0x1) + +enum icp_qat_uof_mem_region 
{ + ICP_QAT_UOF_SRAM_REGION = 0x0, + ICP_QAT_UOF_LMEM_REGION = 0x3, + ICP_QAT_UOF_UMEM_REGION = 0x5 +}; + +enum icp_qat_uof_regtype { + ICP_NO_DEST = 0, + ICP_GPA_REL = 1, + ICP_GPA_ABS = 2, + ICP_GPB_REL = 3, + ICP_GPB_ABS = 4, + ICP_SR_REL = 5, + ICP_SR_RD_REL = 6, + ICP_SR_WR_REL = 7, + ICP_SR_ABS = 8, + ICP_SR_RD_ABS = 9, + ICP_SR_WR_ABS = 10, + ICP_DR_REL = 19, + ICP_DR_RD_REL = 20, + ICP_DR_WR_REL = 21, + ICP_DR_ABS = 22, + ICP_DR_RD_ABS = 23, + ICP_DR_WR_ABS = 24, + ICP_LMEM = 26, + ICP_LMEM0 = 27, + ICP_LMEM1 = 28, + ICP_NEIGH_REL = 31, + ICP_LMEM2 = 61, + ICP_LMEM3 = 62, +}; + +enum icp_qat_css_fwtype { CSS_AE_FIRMWARE = 0, CSS_MMP_FIRMWARE = 1 }; + +struct icp_qat_uclo_page { + struct icp_qat_uclo_encap_page *encap_page; + struct icp_qat_uclo_region *region; + unsigned int flags; +}; + +struct icp_qat_uclo_region { + struct icp_qat_uclo_page *loaded; + struct icp_qat_uclo_page *page; +}; + +struct icp_qat_uclo_aeslice { + struct icp_qat_uclo_region *region; + struct icp_qat_uclo_page *page; + struct icp_qat_uclo_page *cur_page[ICP_QAT_UCLO_MAX_CTX]; + struct icp_qat_uclo_encapme *encap_image; + unsigned int ctx_mask_assigned; + unsigned int new_uaddr[ICP_QAT_UCLO_MAX_CTX]; +}; + +struct icp_qat_uclo_aedata { + unsigned int slice_num; + unsigned int eff_ustore_size; + struct icp_qat_uclo_aeslice ae_slices[ICP_QAT_UCLO_MAX_CTX]; + unsigned int shareable_ustore; +}; + +struct icp_qat_uof_encap_obj { + char *beg_uof; + struct icp_qat_uof_objhdr *obj_hdr; + struct icp_qat_uof_chunkhdr *chunk_hdr; + struct icp_qat_uof_varmem_seg *var_mem_seg; +}; + +struct icp_qat_uclo_encap_uwblock { + unsigned int start_addr; + unsigned int words_num; + uint64_t micro_words; +}; + +struct icp_qat_uclo_encap_page { + unsigned int def_page; + unsigned int page_region; + unsigned int beg_addr_v; + unsigned int beg_addr_p; + unsigned int micro_words_num; + unsigned int uwblock_num; + struct icp_qat_uclo_encap_uwblock *uwblock; +}; + +struct icp_qat_uclo_encapme { + struct icp_qat_uof_image *img_ptr; + struct icp_qat_uclo_encap_page *page; + unsigned int ae_reg_num; + struct icp_qat_uof_ae_reg *ae_reg; + unsigned int init_regsym_num; + struct icp_qat_uof_init_regsym *init_regsym; + unsigned int sbreak_num; + struct icp_qat_uof_sbreak *sbreak; + unsigned int uwords_num; +}; + +struct icp_qat_uclo_init_mem_table { + unsigned int entry_num; + struct icp_qat_uof_initmem *init_mem; +}; + +struct icp_qat_uclo_objhdr { + char *file_buff; + unsigned int checksum; + unsigned int size; +}; + +struct icp_qat_uof_strtable { + unsigned int table_len; + unsigned int reserved; + uint64_t strings; +}; + +struct icp_qat_uclo_objhandle { + unsigned int prod_type; + unsigned int prod_rev; + struct icp_qat_uclo_objhdr *obj_hdr; + struct icp_qat_uof_encap_obj encap_uof_obj; + struct icp_qat_uof_strtable str_table; + struct icp_qat_uclo_encapme ae_uimage[ICP_QAT_UCLO_MAX_UIMAGE]; + struct icp_qat_uclo_aedata ae_data[ICP_QAT_UCLO_MAX_AE]; + struct icp_qat_uclo_init_mem_table init_mem_tab; + struct icp_qat_uof_batch_init *lm_init_tab[ICP_QAT_UCLO_MAX_AE]; + struct icp_qat_uof_batch_init *umem_init_tab[ICP_QAT_UCLO_MAX_AE]; + int uimage_num; + int uword_in_bytes; + int global_inited; + unsigned int ae_num; + unsigned int ustore_phy_size; + void *obj_buf; + uint64_t *uword_buf; +}; + +struct icp_qat_uof_uword_block { + unsigned int start_addr; + unsigned int words_num; + unsigned int uword_offset; + unsigned int reserved; +}; + +struct icp_qat_uof_filehdr { + unsigned short file_id; + unsigned short reserved1; + char min_ver; + 
char maj_ver; + unsigned short reserved2; + unsigned short max_chunks; + unsigned short num_chunks; +}; + +struct icp_qat_uof_filechunkhdr { + char chunk_id[ICP_QAT_UOF_OBJID_LEN]; + unsigned int checksum; + unsigned int offset; + unsigned int size; +}; + +struct icp_qat_uof_objhdr { + unsigned int ac_dev_type; + unsigned short min_cpu_ver; + unsigned short max_cpu_ver; + short max_chunks; + short num_chunks; + unsigned int reserved1; + unsigned int reserved2; +}; + +struct icp_qat_uof_chunkhdr { + char chunk_id[ICP_QAT_UOF_OBJID_LEN]; + unsigned int offset; + unsigned int size; +}; + +struct icp_qat_uof_memvar_attr { + unsigned int offset_in_byte; + unsigned int value; +}; + +struct icp_qat_uof_initmem { + unsigned int sym_name; + char region; + char scope; + unsigned short reserved1; + unsigned int addr; + unsigned int num_in_bytes; + unsigned int val_attr_num; +}; + +struct icp_qat_uof_init_regsym { + unsigned int sym_name; + char init_type; + char value_type; + char reg_type; + unsigned char ctx; + unsigned int reg_addr; + unsigned int value; +}; + +struct icp_qat_uof_varmem_seg { + unsigned int sram_base; + unsigned int sram_size; + unsigned int sram_alignment; + unsigned int sdram_base; + unsigned int sdram_size; + unsigned int sdram_alignment; + unsigned int sdram1_base; + unsigned int sdram1_size; + unsigned int sdram1_alignment; + unsigned int scratch_base; + unsigned int scratch_size; + unsigned int scratch_alignment; +}; + +struct icp_qat_uof_gtid { + char tool_id[ICP_QAT_UOF_OBJID_LEN]; + int tool_ver; + unsigned int reserved1; + unsigned int reserved2; +}; + +struct icp_qat_uof_sbreak { + unsigned int page_num; + unsigned int virt_uaddr; + unsigned char sbreak_type; + unsigned char reg_type; + unsigned short reserved1; + unsigned int addr_offset; + unsigned int reg_addr; +}; + +struct icp_qat_uof_code_page { + unsigned int page_region; + unsigned int page_num; + unsigned char def_page; + unsigned char reserved2; + unsigned short reserved1; + unsigned int beg_addr_v; + unsigned int beg_addr_p; + unsigned int neigh_reg_tab_offset; + unsigned int uc_var_tab_offset; + unsigned int imp_var_tab_offset; + unsigned int imp_expr_tab_offset; + unsigned int code_area_offset; +}; + +struct icp_qat_uof_image { + unsigned int img_name; + unsigned int ae_assigned; + unsigned int ctx_assigned; + unsigned int ac_dev_type; + unsigned int entry_address; + unsigned int fill_pattern[2]; + unsigned int reloadable_size; + unsigned char sensitivity; + unsigned char reserved; + unsigned short ae_mode; + unsigned short max_ver; + unsigned short min_ver; + unsigned short image_attrib; + unsigned short reserved2; + unsigned short page_region_num; + unsigned short numpages; + unsigned int reg_tab_offset; + unsigned int init_reg_sym_tab; + unsigned int sbreak_tab; + unsigned int app_metadata; +}; + +struct icp_qat_uof_objtable { + unsigned int entry_num; +}; + +struct icp_qat_uof_ae_reg { + unsigned int name; + unsigned int vis_name; + unsigned short type; + unsigned short addr; + unsigned short access_mode; + unsigned char visible; + unsigned char reserved1; + unsigned short ref_count; + unsigned short reserved2; + unsigned int xo_id; +}; + +struct icp_qat_uof_code_area { + unsigned int micro_words_num; + unsigned int uword_block_tab; +}; + +struct icp_qat_uof_batch_init { + unsigned int ae; + unsigned int addr; + unsigned int *value; + unsigned int size; + struct icp_qat_uof_batch_init *next; +}; + +struct icp_qat_suof_img_hdr { + const char *simg_buf; + unsigned long simg_len; + const char *css_header; 
+ const char *css_key; + const char *css_signature; + const char *css_simg; + unsigned long simg_size; + unsigned int ae_num; + unsigned int ae_mask; + unsigned int fw_type; + unsigned long simg_name; + unsigned long appmeta_data; +}; + +struct icp_qat_suof_img_tbl { + unsigned int num_simgs; + struct icp_qat_suof_img_hdr *simg_hdr; +}; + +struct icp_qat_suof_handle { + unsigned int file_id; + unsigned int check_sum; + char min_ver; + char maj_ver; + char fw_type; + const char *suof_buf; + unsigned int suof_size; + char *sym_str; + unsigned int sym_size; + struct icp_qat_suof_img_tbl img_table; +}; + +struct icp_qat_fw_auth_desc { + unsigned int img_len; + unsigned int ae_mask; + unsigned int css_hdr_high; + unsigned int css_hdr_low; + unsigned int img_high; + unsigned int img_low; + unsigned int signature_high; + unsigned int signature_low; + unsigned int fwsk_pub_high; + unsigned int fwsk_pub_low; + unsigned int img_ae_mode_data_high; + unsigned int img_ae_mode_data_low; + unsigned int img_ae_init_data_high; + unsigned int img_ae_init_data_low; + unsigned int img_ae_insts_high; + unsigned int img_ae_insts_low; +}; + +struct icp_qat_auth_chunk { + struct icp_qat_fw_auth_desc fw_auth_desc; + u64 chunk_size; + u64 chunk_bus_addr; +}; + +struct icp_qat_css_hdr { + unsigned int module_type; + unsigned int header_len; + unsigned int header_ver; + unsigned int module_id; + unsigned int module_vendor; + unsigned int date; + unsigned int size; + unsigned int key_size; + unsigned int module_size; + unsigned int exponent_size; + unsigned int fw_type; + unsigned int reserved[21]; +}; + +struct icp_qat_simg_ae_mode { + unsigned int file_id; + unsigned short maj_ver; + unsigned short min_ver; + unsigned int dev_type; + unsigned short devmax_ver; + unsigned short devmin_ver; + unsigned int ae_mask; + unsigned int ctx_enables; + char fw_type; + char ctx_mode; + char nn_mode; + char lm0_mode; + char lm1_mode; + char scs_mode; + char lm2_mode; + char lm3_mode; + char tindex_mode; + unsigned char reserved[7]; + char simg_name[256]; + char appmeta_data[256]; +}; + +struct icp_qat_suof_filehdr { + unsigned int file_id; + unsigned int check_sum; + char min_ver; + char maj_ver; + char fw_type; + char reserved; + unsigned short max_chunks; + unsigned short num_chunks; +}; + +struct icp_qat_suof_chunk_hdr { + char chunk_id[ICP_QAT_SUOF_OBJ_ID_LEN]; + u64 offset; + u64 size; +}; + +struct icp_qat_suof_strtable { + unsigned int tab_length; + unsigned int strings; +}; + +struct icp_qat_suof_objhdr { + unsigned int img_length; + unsigned int reserved; +}; + +struct icp_qat_mof_file_hdr { + unsigned int file_id; + unsigned int checksum; + char min_ver; + char maj_ver; + unsigned short reserved; + unsigned short max_chunks; + unsigned short num_chunks; +}; + +struct icp_qat_mof_chunkhdr { + char chunk_id[ICP_QAT_MOF_OBJ_ID_LEN]; + u64 offset; + u64 size; +}; + +struct icp_qat_mof_str_table { + unsigned int tab_len; + unsigned int strings; +}; + +struct icp_qat_mof_obj_hdr { + unsigned short max_chunks; + unsigned short num_chunks; + unsigned int reserved; +}; + +struct icp_qat_mof_obj_chunkhdr { + char chunk_id[ICP_QAT_MOF_OBJ_CHUNKID_LEN]; + u64 offset; + u64 size; + unsigned int name; + unsigned int reserved; +}; + +struct icp_qat_mof_objhdr { + char *obj_name; + const char *obj_buf; + unsigned int obj_size; +}; + +struct icp_qat_mof_table { + unsigned int num_objs; + struct icp_qat_mof_objhdr *obj_hdr; +}; + +struct icp_qat_mof_handle { + unsigned int file_id; + unsigned int checksum; + char min_ver; + char 
maj_ver; + const char *mof_buf; + u32 mof_size; + char *sym_str; + unsigned int sym_size; + const char *uobjs_hdr; + const char *sobjs_hdr; + struct icp_qat_mof_table obj_table; +}; +#endif diff --git a/sys/dev/qat/include/common/qat_freebsd.h b/sys/dev/qat/include/common/qat_freebsd.h new file mode 100644 index 000000000000..0a9cfc0188ef --- /dev/null +++ b/sys/dev/qat/include/common/qat_freebsd.h @@ -0,0 +1,156 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef QAT_FREEBSD_H_ +#define QAT_FREEBSD_H_ + +#include <sys/param.h> +#include <sys/module.h> +#include <sys/bus.h> +#include <sys/param.h> +#include <sys/malloc.h> +#include <sys/firmware.h> +#include <sys/rman.h> +#include <sys/types.h> +#include <sys/ctype.h> +#include <sys/ioccom.h> +#include <sys/param.h> +#include <sys/lock.h> +#include <linux/device.h> +#include <linux/dma-mapping.h> +#include <linux/completion.h> +#include <linux/list.h> +#include <machine/bus.h> +#include <machine/bus_dma.h> +#include <sys/firmware.h> +#include <asm/uaccess.h> +#include <linux/math64.h> +#include <linux/spinlock.h> + +#define PCI_VENDOR_ID_INTEL 0x8086 + +#if !defined(__bool_true_false_are_defined) +#define __bool_true_false_are_defined 1 +#define false 0 +#define true 1 +#if __STDC_VERSION__ < 199901L && __GNUC__ < 3 && !defined(__INTEL_COMPILER) +typedef int _Bool; +#endif +typedef _Bool bool; +#endif /* !__bool_true_false_are_defined && !__cplusplus */ + +#if __STDC_VERSION__ < 199901L && __GNUC__ < 3 && !defined(__INTEL_COMPILER) +typedef int _Bool; +#endif + +#define pause_ms(wmesg, ms) pause_sbt(wmesg, (ms)*SBT_1MS, 0, C_HARDCLOCK) + +/* Function sets the MaxPayload size of a PCI device. */ +int pci_set_max_payload(device_t dev, int payload_size); + +device_t pci_find_pf(device_t vf); + +MALLOC_DECLARE(M_QAT); + +struct msix_entry { + struct resource *irq; + void *cookie; +}; + +struct pci_device_id { + uint16_t vendor; + uint16_t device; +}; + +struct bus_dmamem { + bus_dma_tag_t dma_tag; + bus_dmamap_t dma_map; + void *dma_vaddr; + bus_addr_t dma_baddr; +}; + +/* + * Allocate a mapping. On success, zero is returned and the 'dma_vaddr' + * and 'dma_baddr' fields are populated with the virtual and bus addresses, + * respectively, of the mapping. + */ +int bus_dma_mem_create(struct bus_dmamem *mem, + bus_dma_tag_t parent, + bus_size_t alignment, + bus_addr_t lowaddr, + bus_size_t len, + int flags); + +/* + * Release a mapping created by bus_dma_mem_create(). 
+ */ +void bus_dma_mem_free(struct bus_dmamem *mem); + +#define list_for_each_prev_safe(p, n, h) \ + for (p = (h)->prev, n = (p)->prev; p != (h); p = n, n = (p)->prev) + +static inline int +compat_strtoul(const char *cp, unsigned int base, unsigned long *res) +{ + char *end; + + *res = strtoul(cp, &end, base); + + /* skip newline character, if any */ + if (*end == '\n') + end++; + if (*cp == 0 || *end != 0) + return (-EINVAL); + return (0); +} + +static inline int +compat_strtouint(const char *cp, unsigned int base, unsigned int *res) +{ + char *end; + unsigned long temp; + + *res = temp = strtoul(cp, &end, base); + + /* skip newline character, if any */ + if (*end == '\n') + end++; + if (*cp == 0 || *end != 0) + return (-EINVAL); + if (temp != (unsigned int)temp) + return (-ERANGE); + return (0); +} + +static inline int +compat_strtou8(const char *cp, unsigned int base, unsigned char *res) +{ + char *end; + unsigned long temp; + + *res = temp = strtoul(cp, &end, base); + + /* skip newline character, if any */ + if (*end == '\n') + end++; + if (*cp == 0 || *end != 0) + return -EINVAL; + if (temp != (unsigned char)temp) + return -ERANGE; + return 0; +} + +#if __FreeBSD_version >= 1300500 +#undef dev_to_node +static inline int +dev_to_node(device_t dev) +{ + int numa_domain; + + if (!dev || bus_get_domain(dev, &numa_domain) != 0) + return (-1); + else + return (numa_domain); +} +#endif +#endif diff --git a/sys/dev/qat/include/common/sal_statistics_strings.h b/sys/dev/qat/include/common/sal_statistics_strings.h new file mode 100644 index 000000000000..aab88a0b374d --- /dev/null +++ b/sys/dev/qat/include/common/sal_statistics_strings.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef SAL_STATISTICS_STRINGS_H +#define SAL_STATISTICS_STRINGS_H + +/* + * Config values names for statistics + */ +#define SAL_STATS_CFG_ENABLED "statsGeneral" +/**< Config value name for enabling/disabling statistics */ +#define SAL_STATS_CFG_DC "statsDc" +/**< Config value name for enabling/disabling Compression statistics */ +#define SAL_STATS_CFG_DH "statsDh" +/**< Config value name for enabling/disabling Diffie-Helman statistics */ +#define SAL_STATS_CFG_DRBG "statsDrbg" +/**< Config value name for enabling/disabling DRBG statistics */ +#define SAL_STATS_CFG_DSA "statsDsa" +/**< Config value name for enabling/disabling DSA statistics */ +#define SAL_STATS_CFG_ECC "statsEcc" +/**< Config value name for enabling/disabling ECC statistics */ +#define SAL_STATS_CFG_KEYGEN "statsKeyGen" +/**< Config value name for enabling/disabling Key Gen statistics */ +#define SAL_STATS_CFG_LN "statsLn" +/**< Config value name for enabling/disabling Large Number statistics */ +#define SAL_STATS_CFG_PRIME "statsPrime" +/**< Config value name for enabling/disabling Prime statistics */ +#define SAL_STATS_CFG_RSA "statsRsa" +/**< Config value name for enabling/disabling RSA statistics */ +#define SAL_STATS_CFG_SYM "statsSym" +/**< Config value name for enabling/disabling Symmetric Crypto statistics */ + +#endif diff --git a/sys/dev/qat/include/icp_qat_fw.h b/sys/dev/qat/include/icp_qat_fw.h new file mode 100644 index 000000000000..fe470b45c286 --- /dev/null +++ b/sys/dev/qat/include/icp_qat_fw.h @@ -0,0 +1,292 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef _ICP_QAT_FW_H_ +#define _ICP_QAT_FW_H_ +#include <sys/types.h> +#include "icp_qat_hw.h" + +#define QAT_FIELD_SET(flags, val, 
bitpos, mask) \ + { \ + (flags) = (((flags) & (~((mask) << (bitpos)))) | \ + (((val) & (mask)) << (bitpos))); \ + } + +#define QAT_FIELD_GET(flags, bitpos, mask) (((flags) >> (bitpos)) & (mask)) + +#define ICP_QAT_FW_REQ_DEFAULT_SZ 128 +#define ICP_QAT_FW_RESP_DEFAULT_SZ 32 +#define ICP_QAT_FW_COMN_ONE_BYTE_SHIFT 8 +#define ICP_QAT_FW_COMN_SINGLE_BYTE_MASK 0xFF +#define ICP_QAT_FW_NUM_LONGWORDS_1 1 +#define ICP_QAT_FW_NUM_LONGWORDS_2 2 +#define ICP_QAT_FW_NUM_LONGWORDS_3 3 +#define ICP_QAT_FW_NUM_LONGWORDS_4 4 +#define ICP_QAT_FW_NUM_LONGWORDS_5 5 +#define ICP_QAT_FW_NUM_LONGWORDS_6 6 +#define ICP_QAT_FW_NUM_LONGWORDS_7 7 +#define ICP_QAT_FW_NUM_LONGWORDS_10 10 +#define ICP_QAT_FW_NUM_LONGWORDS_13 13 +#define ICP_QAT_FW_NULL_REQ_SERV_ID 1 + +enum icp_qat_fw_comn_resp_serv_id { + ICP_QAT_FW_COMN_RESP_SERV_NULL, + ICP_QAT_FW_COMN_RESP_SERV_CPM_FW, + ICP_QAT_FW_COMN_RESP_SERV_DELIMITER +}; + +enum icp_qat_fw_comn_request_id { + ICP_QAT_FW_COMN_REQ_NULL = 0, + ICP_QAT_FW_COMN_REQ_CPM_FW_PKE = 3, + ICP_QAT_FW_COMN_REQ_CPM_FW_LA = 4, + ICP_QAT_FW_COMN_REQ_CPM_FW_DMA = 7, + ICP_QAT_FW_COMN_REQ_CPM_FW_COMP = 9, + ICP_QAT_FW_COMN_REQ_DELIMITER +}; + +struct icp_qat_fw_comn_req_hdr_cd_pars { + union { + struct { + uint64_t content_desc_addr; + uint16_t content_desc_resrvd1; + uint8_t content_desc_params_sz; + uint8_t content_desc_hdr_resrvd2; + uint32_t content_desc_resrvd3; + } s; + struct { + uint32_t serv_specif_fields[4]; + } s1; + } u; +}; + +struct icp_qat_fw_comn_req_mid { + uint64_t opaque_data; + uint64_t src_data_addr; + uint64_t dest_data_addr; + uint32_t src_length; + uint32_t dst_length; +}; + +struct icp_qat_fw_comn_req_cd_ctrl { + uint32_t content_desc_ctrl_lw[ICP_QAT_FW_NUM_LONGWORDS_5]; +}; + +struct icp_qat_fw_comn_req_hdr { + uint8_t resrvd1; + uint8_t service_cmd_id; + uint8_t service_type; + uint8_t hdr_flags; + uint16_t serv_specif_flags; + uint16_t comn_req_flags; +}; + +struct icp_qat_fw_comn_req_rqpars { + uint32_t serv_specif_rqpars_lw[ICP_QAT_FW_NUM_LONGWORDS_13]; +}; + +struct icp_qat_fw_comn_req { + struct icp_qat_fw_comn_req_hdr comn_hdr; + struct icp_qat_fw_comn_req_hdr_cd_pars cd_pars; + struct icp_qat_fw_comn_req_mid comn_mid; + struct icp_qat_fw_comn_req_rqpars serv_specif_rqpars; + struct icp_qat_fw_comn_req_cd_ctrl cd_ctrl; +}; + +struct icp_qat_fw_comn_error { + uint8_t xlat_err_code; + uint8_t cmp_err_code; +}; + +struct icp_qat_fw_comn_resp_hdr { + uint8_t resrvd1; + uint8_t service_id; + uint8_t response_type; + uint8_t hdr_flags; + struct icp_qat_fw_comn_error comn_error; + uint8_t comn_status; + uint8_t cmd_id; +}; + +struct icp_qat_fw_comn_resp { + struct icp_qat_fw_comn_resp_hdr comn_hdr; + uint64_t opaque_data; + uint32_t resrvd[ICP_QAT_FW_NUM_LONGWORDS_4]; +}; + +#define ICP_QAT_FW_COMN_REQ_FLAG_SET 1 +#define ICP_QAT_FW_COMN_REQ_FLAG_CLR 0 +#define ICP_QAT_FW_COMN_VALID_FLAG_BITPOS 7 +#define ICP_QAT_FW_COMN_VALID_FLAG_MASK 0x1 +#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK 0x7F + +#define ICP_QAT_FW_COMN_OV_SRV_TYPE_GET(icp_qat_fw_comn_req_hdr_t) \ + icp_qat_fw_comn_req_hdr_t.service_type + +#define ICP_QAT_FW_COMN_OV_SRV_TYPE_SET(icp_qat_fw_comn_req_hdr_t, val) \ + icp_qat_fw_comn_req_hdr_t.service_type = val + +#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_GET(icp_qat_fw_comn_req_hdr_t) \ + icp_qat_fw_comn_req_hdr_t.service_cmd_id + +#define ICP_QAT_FW_COMN_OV_SRV_CMD_ID_SET(icp_qat_fw_comn_req_hdr_t, val) \ + icp_qat_fw_comn_req_hdr_t.service_cmd_id = val + +#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_GET(hdr_t) \ + ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_t.hdr_flags) 
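To make the flag arithmetic concrete, here is a minimal stand-alone sketch (values assumed purely for illustration) of the QAT_FIELD_SET/QAT_FIELD_GET helpers applied to the header valid flag, which sits at bit position 7 with a one-bit mask.

/*
 * Minimal sketch: set and read back the "valid" bit of hdr_flags
 * using the generic field macros defined above.
 */
#include <assert.h>
#include <stdint.h>

#define QAT_FIELD_SET(flags, val, bitpos, mask)                          \
	{                                                                 \
		(flags) = (((flags) & (~((mask) << (bitpos)))) |          \
		    (((val) & (mask)) << (bitpos)));                      \
	}
#define QAT_FIELD_GET(flags, bitpos, mask) (((flags) >> (bitpos)) & (mask))

int
main(void)
{
	uint8_t hdr_flags = 0;	/* hypothetical request header flags */

	QAT_FIELD_SET(hdr_flags, 1, 7, 0x1);	/* mark request valid */
	assert(hdr_flags == 0x80);
	assert(QAT_FIELD_GET(hdr_flags, 7, 0x1) == 1);
	return (0);
}

Keeping the valid bit in bit 7 leaves the lower seven bits of hdr_flags for the reserved field covered by ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK (0x7F).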
+ +#define ICP_QAT_FW_COMN_HDR_VALID_FLAG_SET(hdr_t, val) \ + ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val) + +#define ICP_QAT_FW_COMN_VALID_FLAG_GET(hdr_flags) \ + QAT_FIELD_GET(hdr_flags, \ + ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \ + ICP_QAT_FW_COMN_VALID_FLAG_MASK) + +#define ICP_QAT_FW_COMN_HDR_RESRVD_FLD_GET(hdr_flags) \ + (hdr_flags & ICP_QAT_FW_COMN_HDR_RESRVD_FLD_MASK) + +#define ICP_QAT_FW_COMN_VALID_FLAG_SET(hdr_t, val) \ + QAT_FIELD_SET((hdr_t.hdr_flags), \ + (val), \ + ICP_QAT_FW_COMN_VALID_FLAG_BITPOS, \ + ICP_QAT_FW_COMN_VALID_FLAG_MASK) + +#define ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(valid) \ + (((valid)&ICP_QAT_FW_COMN_VALID_FLAG_MASK) \ + << ICP_QAT_FW_COMN_VALID_FLAG_BITPOS) + +#define QAT_COMN_PTR_TYPE_BITPOS 0 +#define QAT_COMN_PTR_TYPE_MASK 0x1 +#define QAT_COMN_CD_FLD_TYPE_BITPOS 1 +#define QAT_COMN_CD_FLD_TYPE_MASK 0x1 +#define QAT_COMN_PTR_TYPE_FLAT 0x0 +#define QAT_COMN_PTR_TYPE_SGL 0x1 +#define QAT_COMN_CD_FLD_TYPE_64BIT_ADR 0x0 +#define QAT_COMN_CD_FLD_TYPE_16BYTE_DATA 0x1 + +#define ICP_QAT_FW_COMN_FLAGS_BUILD(cdt, ptr) \ + ((((cdt)&QAT_COMN_CD_FLD_TYPE_MASK) << QAT_COMN_CD_FLD_TYPE_BITPOS) | \ + (((ptr)&QAT_COMN_PTR_TYPE_MASK) << QAT_COMN_PTR_TYPE_BITPOS)) + +#define ICP_QAT_FW_COMN_PTR_TYPE_GET(flags) \ + QAT_FIELD_GET(flags, QAT_COMN_PTR_TYPE_BITPOS, QAT_COMN_PTR_TYPE_MASK) + +#define ICP_QAT_FW_COMN_CD_FLD_TYPE_GET(flags) \ + QAT_FIELD_GET(flags, \ + QAT_COMN_CD_FLD_TYPE_BITPOS, \ + QAT_COMN_CD_FLD_TYPE_MASK) + +#define ICP_QAT_FW_COMN_PTR_TYPE_SET(flags, val) \ + QAT_FIELD_SET(flags, \ + val, \ + QAT_COMN_PTR_TYPE_BITPOS, \ + QAT_COMN_PTR_TYPE_MASK) + +#define ICP_QAT_FW_COMN_CD_FLD_TYPE_SET(flags, val) \ + QAT_FIELD_SET(flags, \ + val, \ + QAT_COMN_CD_FLD_TYPE_BITPOS, \ + QAT_COMN_CD_FLD_TYPE_MASK) + +#define ICP_QAT_FW_COMN_NEXT_ID_BITPOS 4 +#define ICP_QAT_FW_COMN_NEXT_ID_MASK 0xF0 +#define ICP_QAT_FW_COMN_CURR_ID_BITPOS 0 +#define ICP_QAT_FW_COMN_CURR_ID_MASK 0x0F + +#define ICP_QAT_FW_COMN_NEXT_ID_GET(cd_ctrl_hdr_t) \ + ((((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_NEXT_ID_MASK) >> \ + (ICP_QAT_FW_COMN_NEXT_ID_BITPOS)) + +#define ICP_QAT_FW_COMN_NEXT_ID_SET(cd_ctrl_hdr_t, val) \ + { \ + ((cd_ctrl_hdr_t)->next_curr_id) = \ + ((((cd_ctrl_hdr_t)->next_curr_id) & \ + ICP_QAT_FW_COMN_CURR_ID_MASK) | \ + ((val << ICP_QAT_FW_COMN_NEXT_ID_BITPOS) & \ + ICP_QAT_FW_COMN_NEXT_ID_MASK)); \ + } + +#define ICP_QAT_FW_COMN_CURR_ID_GET(cd_ctrl_hdr_t) \ + (((cd_ctrl_hdr_t)->next_curr_id) & ICP_QAT_FW_COMN_CURR_ID_MASK) + +#define ICP_QAT_FW_COMN_CURR_ID_SET(cd_ctrl_hdr_t, val) \ + { \ + ((cd_ctrl_hdr_t)->next_curr_id) = \ + ((((cd_ctrl_hdr_t)->next_curr_id) & \ + ICP_QAT_FW_COMN_NEXT_ID_MASK) | \ + ((val)&ICP_QAT_FW_COMN_CURR_ID_MASK)); \ + } + +#define QAT_COMN_RESP_CRYPTO_STATUS_BITPOS 7 +#define QAT_COMN_RESP_CRYPTO_STATUS_MASK 0x1 +#define QAT_COMN_RESP_PKE_STATUS_BITPOS 6 +#define QAT_COMN_RESP_PKE_STATUS_MASK 0x1 +#define QAT_COMN_RESP_CMP_STATUS_BITPOS 5 +#define QAT_COMN_RESP_CMP_STATUS_MASK 0x1 +#define QAT_COMN_RESP_XLAT_STATUS_BITPOS 4 +#define QAT_COMN_RESP_XLAT_STATUS_MASK 0x1 +#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS 3 +#define QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK 0x1 + +#define ICP_QAT_FW_COMN_RESP_STATUS_BUILD(crypto, comp, xlat, eolb) \ + ((((crypto)&QAT_COMN_RESP_CRYPTO_STATUS_MASK) \ + << QAT_COMN_RESP_CRYPTO_STATUS_BITPOS) | \ + (((comp)&QAT_COMN_RESP_CMP_STATUS_MASK) \ + << QAT_COMN_RESP_CMP_STATUS_BITPOS) | \ + (((xlat)&QAT_COMN_RESP_XLAT_STATUS_MASK) \ + << QAT_COMN_RESP_XLAT_STATUS_BITPOS) | \ + 
(((eolb)&QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK) \ + << QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS)) + +#define ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(status) \ + QAT_FIELD_GET(status, \ + QAT_COMN_RESP_CRYPTO_STATUS_BITPOS, \ + QAT_COMN_RESP_CRYPTO_STATUS_MASK) + +#define ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(status) \ + QAT_FIELD_GET(status, \ + QAT_COMN_RESP_CMP_STATUS_BITPOS, \ + QAT_COMN_RESP_CMP_STATUS_MASK) + +#define ICP_QAT_FW_COMN_RESP_XLAT_STAT_GET(status) \ + QAT_FIELD_GET(status, \ + QAT_COMN_RESP_XLAT_STATUS_BITPOS, \ + QAT_COMN_RESP_XLAT_STATUS_MASK) + +#define ICP_QAT_FW_COMN_RESP_CMP_END_OF_LAST_BLK_FLAG_GET(status) \ + QAT_FIELD_GET(status, \ + QAT_COMN_RESP_CMP_END_OF_LAST_BLK_BITPOS, \ + QAT_COMN_RESP_CMP_END_OF_LAST_BLK_MASK) + +#define ICP_QAT_FW_COMN_STATUS_FLAG_OK 0 +#define ICP_QAT_FW_COMN_STATUS_FLAG_ERROR 1 +#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_CLR 0 +#define ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_SET 1 +#define ERR_CODE_NO_ERROR 0 +#define ERR_CODE_INVALID_BLOCK_TYPE -1 +#define ERR_CODE_NO_MATCH_ONES_COMP -2 +#define ERR_CODE_TOO_MANY_LEN_OR_DIS -3 +#define ERR_CODE_INCOMPLETE_LEN -4 +#define ERR_CODE_RPT_LEN_NO_FIRST_LEN -5 +#define ERR_CODE_RPT_GT_SPEC_LEN -6 +#define ERR_CODE_INV_LIT_LEN_CODE_LEN -7 +#define ERR_CODE_INV_DIS_CODE_LEN -8 +#define ERR_CODE_INV_LIT_LEN_DIS_IN_BLK -9 +#define ERR_CODE_DIS_TOO_FAR_BACK -10 +#define ERR_CODE_OVERFLOW_ERROR -11 +#define ERR_CODE_SOFT_ERROR -12 +#define ERR_CODE_FATAL_ERROR -13 +#define ERR_CODE_SSM_ERROR -14 +#define ERR_CODE_ENDPOINT_ERROR -15 + +enum icp_qat_fw_slice { + ICP_QAT_FW_SLICE_NULL = 0, + ICP_QAT_FW_SLICE_CIPHER = 1, + ICP_QAT_FW_SLICE_AUTH = 2, + ICP_QAT_FW_SLICE_DRAM_RD = 3, + ICP_QAT_FW_SLICE_DRAM_WR = 4, + ICP_QAT_FW_SLICE_COMP = 5, + ICP_QAT_FW_SLICE_XLAT = 6, + ICP_QAT_FW_SLICE_DELIMITER +}; +#endif diff --git a/sys/dev/qat/include/icp_qat_fw_init_admin.h b/sys/dev/qat/include/icp_qat_fw_init_admin.h new file mode 100644 index 000000000000..6f88de144770 --- /dev/null +++ b/sys/dev/qat/include/icp_qat_fw_init_admin.h @@ -0,0 +1,222 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef _ICP_QAT_FW_INIT_ADMIN_H_ +#define _ICP_QAT_FW_INIT_ADMIN_H_ + +#include "icp_qat_fw.h" + +enum icp_qat_fw_init_admin_cmd_id { + ICP_QAT_FW_INIT_ME = 0, + ICP_QAT_FW_TRNG_ENABLE = 1, + ICP_QAT_FW_TRNG_DISABLE = 2, + ICP_QAT_FW_CONSTANTS_CFG = 3, + ICP_QAT_FW_STATUS_GET = 4, + ICP_QAT_FW_COUNTERS_GET = 5, + ICP_QAT_FW_LOOPBACK = 6, + ICP_QAT_FW_HEARTBEAT_SYNC = 7, + ICP_QAT_FW_HEARTBEAT_GET = 8, + ICP_QAT_FW_COMP_CAPABILITY_GET = 9, + ICP_QAT_FW_CRYPTO_CAPABILITY_GET = 10, + ICP_QAT_FW_HEARTBEAT_TIMER_SET = 13, + ICP_QAT_FW_RL_SLA_CONFIG = 14, + ICP_QAT_FW_RL_INIT = 15, + ICP_QAT_FW_RL_DU_START = 16, + ICP_QAT_FW_RL_DU_STOP = 17, + ICP_QAT_FW_TIMER_GET = 19, + ICP_QAT_FW_CNV_STATS_GET = 20, + ICP_QAT_FW_PKE_REPLAY_STATS_GET = 21 +}; + +enum icp_qat_fw_init_admin_resp_status { + ICP_QAT_FW_INIT_RESP_STATUS_SUCCESS = 0, + ICP_QAT_FW_INIT_RESP_STATUS_FAIL = 1, + ICP_QAT_FW_INIT_RESP_STATUS_UNSUPPORTED = 4 +}; + +enum icp_qat_fw_cnv_error_type { + CNV_ERR_TYPE_NO_ERROR = 0, + CNV_ERR_TYPE_CHECKSUM_ERROR, + CNV_ERR_TYPE_DECOMP_PRODUCED_LENGTH_ERROR, + CNV_ERR_TYPE_DECOMPRESSION_ERROR, + CNV_ERR_TYPE_TRANSLATION_ERROR, + CNV_ERR_TYPE_DECOMP_CONSUMED_LENGTH_ERROR, + CNV_ERR_TYPE_UNKNOWN_ERROR +}; + +#define CNV_ERROR_TYPE_GET(latest_error) \ + ({ \ + __typeof__(latest_error) _lerror = latest_error; \ + (_lerror >> 12) > 
CNV_ERR_TYPE_UNKNOWN_ERROR ? \ + CNV_ERR_TYPE_UNKNOWN_ERROR : \ + (enum icp_qat_fw_cnv_error_type)(_lerror >> 12); \ + }) +#define CNV_ERROR_LENGTH_DELTA_GET(latest_error) \ + ({ \ + __typeof__(latest_error) _lerror = latest_error; \ + ((s16)((_lerror & 0x0FFF) | (_lerror & 0x0800 ? 0xF000 : 0))); \ + }) +#define CNV_ERROR_DECOMP_STATUS_GET(latest_error) ((s8)(latest_error & 0xFF)) + +struct icp_qat_fw_init_admin_req { + u16 init_cfg_sz; + u8 resrvd1; + u8 cmd_id; + u32 max_req_duration; + u64 opaque_data; + + union { + /* ICP_QAT_FW_INIT_ME */ + struct { + u64 resrvd2; + u16 ibuf_size_in_kb; + u16 resrvd3; + u32 resrvd4; + }; + /* ICP_QAT_FW_CONSTANTS_CFG */ + struct { + u64 init_cfg_ptr; + u64 resrvd5; + }; + /* ICP_QAT_FW_HEARTBEAT_TIMER_SET */ + struct { + u64 hb_cfg_ptr; + u32 heartbeat_ticks; + u32 resrvd6; + }; + /* ICP_QAT_FW_RL_SLA_CONFIG */ + struct { + u32 credit_per_sla; + u8 service_id; + u8 vf_id; + u8 resrvd7; + u8 resrvd8; + u32 resrvd9; + u32 resrvd10; + }; + /* ICP_QAT_FW_RL_INIT */ + struct { + u32 rl_period; + u8 config; + u8 resrvd11; + u8 num_me; + u8 resrvd12; + u8 pke_svc_arb_map; + u8 bulk_crypto_svc_arb_map; + u8 compression_svc_arb_map; + u8 resrvd13; + u32 resrvd14; + }; + /* ICP_QAT_FW_RL_DU_STOP */ + struct { + u64 cfg_ptr; + u32 resrvd15; + u32 resrvd16; + }; + }; +} __packed; + +struct icp_qat_fw_init_admin_resp { + u8 flags; + u8 resrvd1; + u8 status; + u8 cmd_id; + union { + u32 resrvd2; + u32 ras_event_count; + /* ICP_QAT_FW_STATUS_GET */ + struct { + u16 version_minor_num; + u16 version_major_num; + }; + /* ICP_QAT_FW_COMP_CAPABILITY_GET */ + u32 extended_features; + /* ICP_QAT_FW_CNV_STATS_GET */ + struct { + u16 error_count; + u16 latest_error; + }; + }; + u64 opaque_data; + union { + u32 resrvd3[4]; + /* ICP_QAT_FW_STATUS_GET */ + struct { + u32 version_patch_num; + u8 context_id; + u8 ae_id; + u16 resrvd4; + u64 resrvd5; + }; + /* ICP_QAT_FW_COMP_CAPABILITY_GET */ + struct { + u16 compression_algos; + u16 checksum_algos; + u32 deflate_capabilities; + u32 resrvd6; + u32 deprecated; + }; + /* ICP_QAT_FW_CRYPTO_CAPABILITY_GET */ + struct { + u32 cipher_algos; + u32 hash_algos; + u16 keygen_algos; + u16 other; + u16 public_key_algos; + u16 prime_algos; + }; + /* ICP_QAT_FW_RL_DU_STOP */ + struct { + u32 resrvd7; + u8 granularity; + u8 resrvd8; + u16 resrvd9; + u32 total_du_time; + u32 resrvd10; + }; + /* ICP_QAT_FW_TIMER_GET */ + struct { + u64 timestamp; + u64 resrvd11; + }; + /* ICP_QAT_FW_COUNTERS_GET */ + struct { + u64 req_rec_count; + u64 resp_sent_count; + }; + /* ICP_QAT_FW_PKE_REPLAY_STATS_GET */ + struct { + u32 successful_count; + u32 unsuccessful_count; + u64 resrvd12; + }; + }; +} __packed; + +enum icp_qat_fw_init_admin_init_flag { ICP_QAT_FW_INIT_FLAG_PKE_DISABLED = 0 }; + +struct icp_qat_fw_init_admin_hb_cnt { + u16 resp_heartbeat_cnt; + u16 req_heartbeat_cnt; +}; + +struct icp_qat_fw_init_admin_hb_stats { + struct icp_qat_fw_init_admin_hb_cnt stats[ADF_NUM_HB_CNT_PER_AE]; +}; + +#define ICP_QAT_FW_COMN_HEARTBEAT_OK 0 +#define ICP_QAT_FW_COMN_HEARTBEAT_BLOCKED 1 +#define ICP_QAT_FW_COMN_HEARTBEAT_FLAG_BITPOS 0 +#define ICP_QAT_FW_COMN_HEARTBEAT_FLAG_MASK 0x1 +#define ICP_QAT_FW_COMN_STATUS_RESRVD_FLD_MASK 0xFE +#define ICP_QAT_FW_COMN_HEARTBEAT_HDR_FLAG_GET(hdr_t) \ + ICP_QAT_FW_COMN_HEARTBEAT_FLAG_GET(hdr_t.flags) + +#define ICP_QAT_FW_COMN_HEARTBEAT_HDR_FLAG_SET(hdr_t, val) \ + ICP_QAT_FW_COMN_HEARTBEAT_FLAG_SET(hdr_t, val) + +#define ICP_QAT_FW_COMN_HEARTBEAT_FLAG_GET(flags) \ + QAT_FIELD_GET(flags, \ + 
ICP_QAT_FW_COMN_HEARTBEAT_FLAG_BITPOS, \ + ICP_QAT_FW_COMN_HEARTBEAT_FLAG_MASK) +#endif diff --git a/sys/dev/qat/include/icp_qat_hw.h b/sys/dev/qat/include/icp_qat_hw.h new file mode 100644 index 000000000000..e98c8db06f61 --- /dev/null +++ b/sys/dev/qat/include/icp_qat_hw.h @@ -0,0 +1,326 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef _ICP_QAT_HW_H_ +#define _ICP_QAT_HW_H_ + +enum icp_qat_hw_ae_id { + ICP_QAT_HW_AE_0 = 0, + ICP_QAT_HW_AE_1 = 1, + ICP_QAT_HW_AE_2 = 2, + ICP_QAT_HW_AE_3 = 3, + ICP_QAT_HW_AE_4 = 4, + ICP_QAT_HW_AE_5 = 5, + ICP_QAT_HW_AE_6 = 6, + ICP_QAT_HW_AE_7 = 7, + ICP_QAT_HW_AE_8 = 8, + ICP_QAT_HW_AE_9 = 9, + ICP_QAT_HW_AE_10 = 10, + ICP_QAT_HW_AE_11 = 11, + ICP_QAT_HW_AE_DELIMITER = 12 +}; + +enum icp_qat_hw_qat_id { + ICP_QAT_HW_QAT_0 = 0, + ICP_QAT_HW_QAT_1 = 1, + ICP_QAT_HW_QAT_2 = 2, + ICP_QAT_HW_QAT_3 = 3, + ICP_QAT_HW_QAT_4 = 4, + ICP_QAT_HW_QAT_5 = 5, + ICP_QAT_HW_QAT_DELIMITER = 6 +}; + +enum icp_qat_hw_auth_algo { + ICP_QAT_HW_AUTH_ALGO_NULL = 0, + ICP_QAT_HW_AUTH_ALGO_SHA1 = 1, + ICP_QAT_HW_AUTH_ALGO_MD5 = 2, + ICP_QAT_HW_AUTH_ALGO_SHA224 = 3, + ICP_QAT_HW_AUTH_ALGO_SHA256 = 4, + ICP_QAT_HW_AUTH_ALGO_SHA384 = 5, + ICP_QAT_HW_AUTH_ALGO_SHA512 = 6, + ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC = 7, + ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC = 8, + ICP_QAT_HW_AUTH_ALGO_AES_F9 = 9, + ICP_QAT_HW_AUTH_ALGO_GALOIS_128 = 10, + ICP_QAT_HW_AUTH_ALGO_GALOIS_64 = 11, + ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 = 12, + ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 = 13, + ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 = 14, + ICP_QAT_HW_AUTH_RESERVED_1 = 15, + ICP_QAT_HW_AUTH_RESERVED_2 = 16, + ICP_QAT_HW_AUTH_ALGO_SHA3_256 = 17, + ICP_QAT_HW_AUTH_RESERVED_3 = 18, + ICP_QAT_HW_AUTH_ALGO_SHA3_512 = 19, + ICP_QAT_HW_AUTH_ALGO_DELIMITER = 20 +}; + +enum icp_qat_hw_auth_mode { + ICP_QAT_HW_AUTH_MODE0 = 0, + ICP_QAT_HW_AUTH_MODE1 = 1, + ICP_QAT_HW_AUTH_MODE2 = 2, + ICP_QAT_HW_AUTH_MODE_DELIMITER = 3 +}; + +struct icp_qat_hw_auth_config { + uint32_t config; + uint32_t reserved; +}; +enum icp_qat_slice_mask { + ICP_ACCEL_MASK_CIPHER_SLICE = 0x01, + ICP_ACCEL_MASK_AUTH_SLICE = 0x02, + ICP_ACCEL_MASK_PKE_SLICE = 0x04, + ICP_ACCEL_MASK_COMPRESS_SLICE = 0x08, + ICP_ACCEL_MASK_DEPRECATED = 0x10, + ICP_ACCEL_MASK_EIA3_SLICE = 0x20, + ICP_ACCEL_MASK_SHA3_SLICE = 0x40, + ICP_ACCEL_MASK_CRYPTO0_SLICE = 0x80, + ICP_ACCEL_MASK_CRYPTO1_SLICE = 0x100, + ICP_ACCEL_MASK_CRYPTO2_SLICE = 0x200, + ICP_ACCEL_MASK_SM3_SLICE = 0x400, + ICP_ACCEL_MASK_SM4_SLICE = 0x800 +}; + +enum icp_qat_capabilities_mask { + ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC = BIT(0), + ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC = BIT(1), + ICP_ACCEL_CAPABILITIES_CIPHER = BIT(2), + ICP_ACCEL_CAPABILITIES_AUTHENTICATION = BIT(3), + ICP_ACCEL_CAPABILITIES_RESERVED_1 = BIT(4), + ICP_ACCEL_CAPABILITIES_COMPRESSION = BIT(5), + ICP_ACCEL_CAPABILITIES_DEPRECATED = BIT(6), + ICP_ACCEL_CAPABILITIES_RAND = BIT(7), + ICP_ACCEL_CAPABILITIES_ZUC = BIT(8), + ICP_ACCEL_CAPABILITIES_SHA3 = BIT(9), + ICP_ACCEL_CAPABILITIES_KPT = BIT(10), + ICP_ACCEL_CAPABILITIES_RL = BIT(11), + ICP_ACCEL_CAPABILITIES_HKDF = BIT(12), + ICP_ACCEL_CAPABILITIES_ECEDMONT = BIT(13), + ICP_ACCEL_CAPABILITIES_EXT_ALGCHAIN = BIT(14), + ICP_ACCEL_CAPABILITIES_SHA3_EXT = BIT(15), + ICP_ACCEL_CAPABILITIES_AESGCM_SPC = BIT(16), + ICP_ACCEL_CAPABILITIES_CHACHA_POLY = BIT(17), + ICP_ACCEL_CAPABILITIES_SM2 = BIT(18), + ICP_ACCEL_CAPABILITIES_SM3 = BIT(19), + ICP_ACCEL_CAPABILITIES_SM4 = BIT(20), + ICP_ACCEL_CAPABILITIES_INLINE = BIT(21), + 
ICP_ACCEL_CAPABILITIES_CNV_INTEGRITY = BIT(22), + ICP_ACCEL_CAPABILITIES_CNV_INTEGRITY64 = BIT(23), + ICP_ACCEL_CAPABILITIES_LZ4_COMPRESSION = BIT(24), + ICP_ACCEL_CAPABILITIES_LZ4S_COMPRESSION = BIT(25), + ICP_ACCEL_CAPABILITIES_AES_V2 = BIT(26), + ICP_ACCEL_CAPABILITIES_KPT2 = BIT(27), +}; + +enum icp_qat_extended_dc_capabilities_mask { + ICP_ACCEL_CAPABILITIES_ADVANCED_COMPRESSION = 0x101 +}; + +#define QAT_AUTH_MODE_BITPOS 4 +#define QAT_AUTH_MODE_MASK 0xF +#define QAT_AUTH_ALGO_BITPOS 0 +#define QAT_AUTH_ALGO_MASK 0xF +#define QAT_AUTH_CMP_BITPOS 8 +#define QAT_AUTH_HIGH_BIT 4 +#define QAT_AUTH_CMP_MASK 0x7F +#define QAT_AUTH_SHA3_PADDING_BITPOS 16 +#define QAT_AUTH_SHA3_PADDING_MASK 0x1 +#define QAT_AUTH_ALGO_SHA3_BITPOS 22 +#define QAT_AUTH_ALGO_SHA3_MASK 0x3 +#define ICP_QAT_HW_AUTH_CONFIG_BUILD(mode, algo, cmp_len) \ + (((mode & QAT_AUTH_MODE_MASK) << QAT_AUTH_MODE_BITPOS) | \ + ((algo & QAT_AUTH_ALGO_MASK) << QAT_AUTH_ALGO_BITPOS) | \ + (((algo >> 4) & QAT_AUTH_ALGO_SHA3_MASK) \ + << QAT_AUTH_ALGO_SHA3_BITPOS) | \ + (((((algo == ICP_QAT_HW_AUTH_ALGO_SHA3_256) || \ + (algo == ICP_QAT_HW_AUTH_ALGO_SHA3_512)) ? \ + 1 : \ + 0) & \ + QAT_AUTH_SHA3_PADDING_MASK) \ + << QAT_AUTH_SHA3_PADDING_BITPOS) | \ + ((cmp_len & QAT_AUTH_CMP_MASK) << QAT_AUTH_CMP_BITPOS)) + +struct icp_qat_hw_auth_counter { + __be32 counter; + uint32_t reserved; +}; + +#define QAT_AUTH_COUNT_MASK 0xFFFFFFFF +#define QAT_AUTH_COUNT_BITPOS 0 +#define ICP_QAT_HW_AUTH_COUNT_BUILD(val) \ + (((val)&QAT_AUTH_COUNT_MASK) << QAT_AUTH_COUNT_BITPOS) + +struct icp_qat_hw_auth_setup { + struct icp_qat_hw_auth_config auth_config; + struct icp_qat_hw_auth_counter auth_counter; +}; + +#define QAT_HW_DEFAULT_ALIGNMENT 8 +#define QAT_HW_ROUND_UP(val, n) (((val) + ((n)-1)) & (~(n - 1))) +#define ICP_QAT_HW_NULL_STATE1_SZ 32 +#define ICP_QAT_HW_MD5_STATE1_SZ 16 +#define ICP_QAT_HW_SHA1_STATE1_SZ 20 +#define ICP_QAT_HW_SHA224_STATE1_SZ 32 +#define ICP_QAT_HW_SHA256_STATE1_SZ 32 +#define ICP_QAT_HW_SHA3_256_STATE1_SZ 32 +#define ICP_QAT_HW_SHA384_STATE1_SZ 64 +#define ICP_QAT_HW_SHA512_STATE1_SZ 64 +#define ICP_QAT_HW_SHA3_512_STATE1_SZ 64 +#define ICP_QAT_HW_SHA3_224_STATE1_SZ 28 +#define ICP_QAT_HW_SHA3_384_STATE1_SZ 48 +#define ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ 16 +#define ICP_QAT_HW_AES_CBC_MAC_STATE1_SZ 16 +#define ICP_QAT_HW_AES_F9_STATE1_SZ 32 +#define ICP_QAT_HW_KASUMI_F9_STATE1_SZ 16 +#define ICP_QAT_HW_GALOIS_128_STATE1_SZ 16 +#define ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ 8 +#define ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ 8 +#define ICP_QAT_HW_NULL_STATE2_SZ 32 +#define ICP_QAT_HW_MD5_STATE2_SZ 16 +#define ICP_QAT_HW_SHA1_STATE2_SZ 20 +#define ICP_QAT_HW_SHA224_STATE2_SZ 32 +#define ICP_QAT_HW_SHA256_STATE2_SZ 32 +#define ICP_QAT_HW_SHA3_256_STATE2_SZ 0 +#define ICP_QAT_HW_SHA384_STATE2_SZ 64 +#define ICP_QAT_HW_SHA512_STATE2_SZ 64 +#define ICP_QAT_HW_SHA3_512_STATE2_SZ 0 +#define ICP_QAT_HW_SHA3_224_STATE2_SZ 0 +#define ICP_QAT_HW_SHA3_384_STATE2_SZ 0 +#define ICP_QAT_HW_AES_XCBC_MAC_KEY_SZ 16 +#define ICP_QAT_HW_AES_CBC_MAC_KEY_SZ 16 +#define ICP_QAT_HW_AES_CCM_CBC_E_CTR0_SZ 16 +#define ICP_QAT_HW_F9_IK_SZ 16 +#define ICP_QAT_HW_F9_FK_SZ 16 +#define ICP_QAT_HW_KASUMI_F9_STATE2_SZ \ + (ICP_QAT_HW_F9_IK_SZ + ICP_QAT_HW_F9_FK_SZ) +#define ICP_QAT_HW_AES_F9_STATE2_SZ ICP_QAT_HW_KASUMI_F9_STATE2_SZ +#define ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ 24 +#define ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ 32 +#define ICP_QAT_HW_GALOIS_H_SZ 16 +#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8 +#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16 + +struct icp_qat_hw_auth_sha512 { + 
struct icp_qat_hw_auth_setup inner_setup; + uint8_t state1[ICP_QAT_HW_SHA512_STATE1_SZ]; + struct icp_qat_hw_auth_setup outer_setup; + uint8_t state2[ICP_QAT_HW_SHA512_STATE2_SZ]; +}; + +struct icp_qat_hw_auth_algo_blk { + struct icp_qat_hw_auth_sha512 sha; +}; + +#define ICP_QAT_HW_GALOIS_LEN_A_BITPOS 0 +#define ICP_QAT_HW_GALOIS_LEN_A_MASK 0xFFFFFFFF + +enum icp_qat_hw_cipher_algo { + ICP_QAT_HW_CIPHER_ALGO_NULL = 0, + ICP_QAT_HW_CIPHER_ALGO_DES = 1, + ICP_QAT_HW_CIPHER_ALGO_3DES = 2, + ICP_QAT_HW_CIPHER_ALGO_AES128 = 3, + ICP_QAT_HW_CIPHER_ALGO_AES192 = 4, + ICP_QAT_HW_CIPHER_ALGO_AES256 = 5, + ICP_QAT_HW_CIPHER_ALGO_ARC4 = 6, + ICP_QAT_HW_CIPHER_ALGO_KASUMI = 7, + ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 = 8, + ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 = 9, + ICP_QAT_HW_CIPHER_ALGO_SM4 = 10, + ICP_QAT_HW_CIPHER_ALGO_CHACHA20_POLY1305 = 11, + ICP_QAT_HW_CIPHER_DELIMITER = 12 +}; + +enum icp_qat_hw_cipher_mode { + ICP_QAT_HW_CIPHER_ECB_MODE = 0, + ICP_QAT_HW_CIPHER_CBC_MODE = 1, + ICP_QAT_HW_CIPHER_CTR_MODE = 2, + ICP_QAT_HW_CIPHER_F8_MODE = 3, + ICP_QAT_HW_CIPHER_AEAD_MODE = 4, + ICP_QAT_HW_CIPHER_RESERVED_MODE = 5, + ICP_QAT_HW_CIPHER_XTS_MODE = 6, + ICP_QAT_HW_CIPHER_MODE_DELIMITER = 7 +}; + +struct icp_qat_hw_cipher_config { + uint32_t val; + uint32_t reserved; +}; + +enum icp_qat_hw_cipher_dir { + ICP_QAT_HW_CIPHER_ENCRYPT = 0, + ICP_QAT_HW_CIPHER_DECRYPT = 1, +}; + +enum icp_qat_hw_cipher_convert { + ICP_QAT_HW_CIPHER_NO_CONVERT = 0, + ICP_QAT_HW_CIPHER_KEY_CONVERT = 1, +}; + +#define QAT_CIPHER_MODE_BITPOS 4 +#define QAT_CIPHER_MODE_MASK 0xF +#define QAT_CIPHER_ALGO_BITPOS 0 +#define QAT_CIPHER_ALGO_MASK 0xF +#define QAT_CIPHER_CONVERT_BITPOS 9 +#define QAT_CIPHER_CONVERT_MASK 0x1 +#define QAT_CIPHER_DIR_BITPOS 8 +#define QAT_CIPHER_DIR_MASK 0x1 +#define QAT_CIPHER_AEAD_HASH_CMP_LEN_MASK 0x1F +#define QAT_CIPHER_AEAD_HASH_CMP_LEN_BITPOS 10 +#define QAT_CIPHER_AEAD_AAD_SIZE_LOWER_MASK 0xFF +#define QAT_CIPHER_AEAD_AAD_SIZE_UPPER_MASK 0x3F +#define QAT_CIPHER_AEAD_AAD_UPPER_SHIFT 8 +#define QAT_CIPHER_AEAD_AAD_LOWER_SHIFT 24 +#define QAT_CIPHER_AEAD_AAD_SIZE_BITPOS 16 +#define QAT_CIPHER_MODE_F8_KEY_SZ_MULT 2 +#define QAT_CIPHER_MODE_XTS_KEY_SZ_MULT 2 +#define ICP_QAT_HW_CIPHER_CONFIG_BUILD(mode, algo, convert, dir) \ + (((mode & QAT_CIPHER_MODE_MASK) << QAT_CIPHER_MODE_BITPOS) | \ + ((algo & QAT_CIPHER_ALGO_MASK) << QAT_CIPHER_ALGO_BITPOS) | \ + ((convert & QAT_CIPHER_CONVERT_MASK) << QAT_CIPHER_CONVERT_BITPOS) | \ + ((dir & QAT_CIPHER_DIR_MASK) << QAT_CIPHER_DIR_BITPOS)) +#define ICP_QAT_HW_DES_BLK_SZ 8 +#define ICP_QAT_HW_3DES_BLK_SZ 8 +#define ICP_QAT_HW_NULL_BLK_SZ 8 +#define ICP_QAT_HW_AES_BLK_SZ 16 +#define ICP_QAT_HW_KASUMI_BLK_SZ 8 +#define ICP_QAT_HW_SNOW_3G_BLK_SZ 8 +#define ICP_QAT_HW_ZUC_3G_BLK_SZ 8 +#define ICP_QAT_HW_NULL_KEY_SZ 256 +#define ICP_QAT_HW_DES_KEY_SZ 8 +#define ICP_QAT_HW_3DES_KEY_SZ 24 +#define ICP_QAT_HW_AES_128_KEY_SZ 16 +#define ICP_QAT_HW_AES_192_KEY_SZ 24 +#define ICP_QAT_HW_AES_256_KEY_SZ 32 +#define ICP_QAT_HW_AES_128_F8_KEY_SZ \ + (ICP_QAT_HW_AES_128_KEY_SZ * QAT_CIPHER_MODE_F8_KEY_SZ_MULT) +#define ICP_QAT_HW_AES_192_F8_KEY_SZ \ + (ICP_QAT_HW_AES_192_KEY_SZ * QAT_CIPHER_MODE_F8_KEY_SZ_MULT) +#define ICP_QAT_HW_AES_256_F8_KEY_SZ \ + (ICP_QAT_HW_AES_256_KEY_SZ * QAT_CIPHER_MODE_F8_KEY_SZ_MULT) +#define ICP_QAT_HW_AES_128_XTS_KEY_SZ \ + (ICP_QAT_HW_AES_128_KEY_SZ * QAT_CIPHER_MODE_XTS_KEY_SZ_MULT) +#define ICP_QAT_HW_AES_256_XTS_KEY_SZ \ + (ICP_QAT_HW_AES_256_KEY_SZ * QAT_CIPHER_MODE_XTS_KEY_SZ_MULT) +#define ICP_QAT_HW_KASUMI_KEY_SZ 16 +#define 
ICP_QAT_HW_KASUMI_F8_KEY_SZ \ + (ICP_QAT_HW_KASUMI_KEY_SZ * QAT_CIPHER_MODE_F8_KEY_SZ_MULT) +#define ICP_QAT_HW_AES_128_XTS_KEY_SZ \ + (ICP_QAT_HW_AES_128_KEY_SZ * QAT_CIPHER_MODE_XTS_KEY_SZ_MULT) +#define ICP_QAT_HW_AES_256_XTS_KEY_SZ \ + (ICP_QAT_HW_AES_256_KEY_SZ * QAT_CIPHER_MODE_XTS_KEY_SZ_MULT) +#define ICP_QAT_HW_ARC4_KEY_SZ 256 +#define ICP_QAT_HW_SNOW_3G_UEA2_KEY_SZ 16 +#define ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ 16 +#define ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ 16 +#define ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ 16 +#define ICP_QAT_HW_MODE_F8_NUM_REG_TO_CLEAR 2 +#define INIT_SHRAM_CONSTANTS_TABLE_SZ 1024 + +struct icp_qat_hw_cipher_aes256_f8 { + struct icp_qat_hw_cipher_config cipher_config; + uint8_t key[ICP_QAT_HW_AES_256_F8_KEY_SZ]; +}; + +struct icp_qat_hw_cipher_algo_blk { + struct icp_qat_hw_cipher_aes256_f8 aes; +} __aligned(64); +#endif diff --git a/sys/dev/qat/include/qat_ocf_mem_pool.h b/sys/dev/qat/include/qat_ocf_mem_pool.h new file mode 100644 index 000000000000..d1a59835f4fe --- /dev/null +++ b/sys/dev/qat/include/qat_ocf_mem_pool.h @@ -0,0 +1,142 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef _QAT_OCF_MEM_POOL_H_ +#define _QAT_OCF_MEM_POOL_H_ + +/* System headers */ +#include <sys/types.h> + +/* QAT specific headers */ +#include "cpa.h" +#include "cpa_cy_sym_dp.h" +#include "icp_qat_fw_la.h" + +#define QAT_OCF_MAX_LEN (64 * 1024) +#define QAT_OCF_MAX_FLATS (32) +#define QAT_OCF_MAX_DIGEST SHA512_DIGEST_LENGTH +#define QAT_OCF_MAX_SYMREQ (256) +#define QAT_OCF_MEM_POOL_SIZE ((QAT_OCF_MAX_SYMREQ * 2 + 1) * 2) +#define QAT_OCF_MAXLEN 64 * 1024 + +/* Dedicated structure due to flexible arrays not allowed to be + * allocated on stack */ +struct qat_ocf_buffer_list { + Cpa64U reserved0; + Cpa32U numBuffers; + Cpa32U reserved1; + CpaPhysFlatBuffer flatBuffers[QAT_OCF_MAX_FLATS]; +}; + +struct qat_ocf_dma_mem { + bus_dma_tag_t dma_tag; + bus_dmamap_t dma_map; + bus_dma_segment_t dma_seg; + void *dma_vaddr; +} __aligned(64); + +struct qat_ocf_cookie { + /* Source SGLs */ + struct qat_ocf_buffer_list src_buffers; + /* Destination SGL */ + struct qat_ocf_buffer_list dst_buffers; + + /* Cache OP data */ + CpaCySymDpOpData pOpdata; + + /* IV max size taken from cryptdev */ + uint8_t qat_ocf_iv_buf[EALG_MAX_BLOCK_LEN]; + bus_addr_t qat_ocf_iv_buf_paddr; + uint8_t qat_ocf_digest[QAT_OCF_MAX_DIGEST]; + bus_addr_t qat_ocf_digest_paddr; + /* Used only in case of separated AAD and GCM, CCM and RC4 */ + uint8_t qat_ocf_gcm_aad[ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX]; + bus_addr_t qat_ocf_gcm_aad_paddr; + + /* Source SGLs */ + struct qat_ocf_dma_mem src_dma_mem; + bus_addr_t src_buffer_list_paddr; + + /* Destination SGL */ + struct qat_ocf_dma_mem dst_dma_mem; + bus_addr_t dst_buffer_list_paddr; + + /* AAD - used only if separated AAD is used by OCF and HW requires + * to have it at the beginning of source buffer */ + struct qat_ocf_dma_mem gcm_aad_dma_mem; + bus_addr_t gcm_aad_buffer_list_paddr; + CpaBoolean is_sep_aad_used; + + /* Cache OP data */ + bus_addr_t pOpData_paddr; + /* misc */ + struct cryptop *crp_op; + + /* This cookie tag and map */ + bus_dma_tag_t dma_tag; + bus_dmamap_t dma_map; +}; + +struct qat_ocf_session { + CpaCySymSessionCtx sessionCtx; + Cpa32U sessionCtxSize; + Cpa32U authLen; + Cpa32U aadLen; +}; + +struct qat_ocf_dsession { + struct qat_ocf_instance *qatInstance; + struct qat_ocf_session encSession; + struct qat_ocf_session decSession; +}; + +struct qat_ocf_load_cb_arg { + struct cryptop *crp_op; + struct 
qat_ocf_cookie *qat_cookie; + CpaCySymDpOpData *pOpData; + int error; +}; + +struct qat_ocf_instance { + CpaInstanceHandle cyInstHandle; + struct mtx cyInstMtx; + struct qat_ocf_dma_mem cookie_dmamem[QAT_OCF_MEM_POOL_SIZE]; + struct qat_ocf_cookie *cookie_pool[QAT_OCF_MEM_POOL_SIZE]; + struct qat_ocf_cookie *free_cookie[QAT_OCF_MEM_POOL_SIZE]; + int free_cookie_ptr; + struct mtx cookie_pool_mtx; + int32_t driver_id; +}; + +/* Init/deinit */ +CpaStatus qat_ocf_cookie_pool_init(struct qat_ocf_instance *instance, + device_t dev); +void qat_ocf_cookie_pool_deinit(struct qat_ocf_instance *instance); +/* Alloc/free */ +CpaStatus qat_ocf_cookie_alloc(struct qat_ocf_instance *instance, + struct qat_ocf_cookie **buffers_out); +void qat_ocf_cookie_free(struct qat_ocf_instance *instance, + struct qat_ocf_cookie *cookie); +/* Pre/post sync */ +CpaStatus qat_ocf_cookie_dma_pre_sync(struct cryptop *crp, + CpaCySymDpOpData *pOpData); +CpaStatus qat_ocf_cookie_dma_post_sync(struct cryptop *crp, + CpaCySymDpOpData *pOpData); +/* Bus DMA unload */ +CpaStatus qat_ocf_cookie_dma_unload(struct cryptop *crp, + CpaCySymDpOpData *pOpData); +/* Bus DMA load callbacks */ +void qat_ocf_crypto_load_buf_cb(void *_arg, + bus_dma_segment_t *segs, + int nseg, + int error); +void qat_ocf_crypto_load_obuf_cb(void *_arg, + bus_dma_segment_t *segs, + int nseg, + int error); +void qat_ocf_crypto_load_aadbuf_cb(void *_arg, + bus_dma_segment_t *segs, + int nseg, + int error); + +#endif /* _QAT_OCF_MEM_POOL_H_ */ diff --git a/sys/dev/qat/include/qat_ocf_utils.h b/sys/dev/qat/include/qat_ocf_utils.h new file mode 100644 index 000000000000..0cacd8f0a84f --- /dev/null +++ b/sys/dev/qat/include/qat_ocf_utils.h @@ -0,0 +1,61 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +#ifndef _QAT_OCF_UTILS_H_ +#define _QAT_OCF_UTILS_H_ +/* System headers */ +#include <sys/types.h> +#include <sys/mbuf.h> +#include <machine/bus_dma.h> + +/* Cryptodev headers */ +#include <opencrypto/cryptodev.h> +#include <crypto/sha2/sha512.h> + +/* QAT specific headers */ +#include "qat_ocf_mem_pool.h" +#include "cpa.h" +#include "cpa_cy_sym_dp.h" + +static inline CpaBoolean +is_gmac_exception(const struct crypto_session_params *csp) +{ + if (CSP_MODE_DIGEST == csp->csp_mode) + if (CRYPTO_AES_NIST_GMAC == csp->csp_auth_alg) + return CPA_TRUE; + + return CPA_FALSE; +} + +static inline CpaBoolean +is_sep_aad_supported(const struct crypto_session_params *csp) +{ + if (CPA_TRUE == is_gmac_exception(csp)) + return CPA_FALSE; + + if (CSP_MODE_AEAD == csp->csp_mode) + if (CRYPTO_AES_NIST_GCM_16 == csp->csp_cipher_alg || + CRYPTO_AES_NIST_GMAC == csp->csp_cipher_alg) + return CPA_TRUE; + + return CPA_FALSE; +} + +static inline CpaBoolean +is_use_sep_digest(const struct crypto_session_params *csp) +{ + /* Use separated digest for all digest/hash operations, + * including GMAC */ + if (CSP_MODE_DIGEST == csp->csp_mode || CSP_MODE_ETA == csp->csp_mode) + return CPA_TRUE; + + return CPA_FALSE; +} + +int qat_ocf_handle_session_update(struct qat_ocf_dsession *ocf_dsession, + struct cryptop *crp); + +CpaStatus qat_ocf_wait_for_session(CpaCySymSessionCtx sessionCtx, + Cpa32U timeoutMS); + +#endif /* _QAT_OCF_UTILS_H_ */ diff --git a/sys/dev/qat/qat/qat_ocf.c b/sys/dev/qat/qat/qat_ocf.c new file mode 100644 index 000000000000..2461f3134a77 --- /dev/null +++ b/sys/dev/qat/qat/qat_ocf.c @@ -0,0 +1,1228 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ 
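/*
 * Editor's note: an illustrative sketch, not part of the imported sources.
 * It demonstrates what the inline predicates from qat_ocf_utils.h above are
 * expected to report for two common OCF session layouts; the function name
 * is hypothetical and the block is guarded out so it is never compiled.
 */
#if 0
static void
example_classify_session(const struct crypto_session_params *csp)
{
	/*
	 * AES-GCM AEAD session: the firmware accepts a separated AAD buffer,
	 * so is_sep_aad_supported() returns CPA_TRUE and the driver copies
	 * the AAD into the cookie's qat_ocf_gcm_aad area rather than
	 * prepending it to the source SGL.
	 */
	if (csp->csp_mode == CSP_MODE_AEAD &&
	    csp->csp_cipher_alg == CRYPTO_AES_NIST_GCM_16)
		MPASS(is_sep_aad_supported(csp) == CPA_TRUE);

	/*
	 * Plain digest or encrypt-then-authenticate session: the digest is
	 * produced into the cookie's separated qat_ocf_digest buffer, so
	 * is_use_sep_digest() returns CPA_TRUE and the completion callback
	 * copies (or compares) it against the request's digest region.
	 */
	if (csp->csp_mode == CSP_MODE_DIGEST || csp->csp_mode == CSP_MODE_ETA)
		MPASS(is_use_sep_digest(csp) == CPA_TRUE);
}
#endif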
+/* System headers */ +#include <sys/param.h> +#include <sys/systm.h> +#include <sys/bus.h> +#include <sys/cpu.h> +#include <sys/kernel.h> +#include <sys/mbuf.h> +#include <sys/module.h> +#include <sys/mutex.h> + +/* Cryptodev headers */ +#include <opencrypto/cryptodev.h> +#include "cryptodev_if.h" + +/* QAT specific headers */ +#include "cpa.h" +#include "cpa_cy_im.h" +#include "cpa_cy_sym_dp.h" +#include "adf_accel_devices.h" +#include "adf_common_drv.h" +#include "lac_sym_hash_defs.h" +#include "lac_sym_qat_hash_defs_lookup.h" + +/* To get only IRQ instances */ +#include "icp_accel_devices.h" +#include "icp_adf_accel_mgr.h" +#include "lac_sal_types.h" + +/* QAT OCF specific headers */ +#include "qat_ocf_mem_pool.h" +#include "qat_ocf_utils.h" + +#define QAT_OCF_MAX_INSTANCES (256) +#define QAT_OCF_SESSION_WAIT_TIMEOUT_MS (1000) + +MALLOC_DEFINE(M_QAT_OCF, "qat_ocf", "qat_ocf(4) memory allocations"); + +/* QAT OCF internal structures */ +struct qat_ocf_softc { + device_t sc_dev; + int32_t cryptodev_id; + struct qat_ocf_instance cyInstHandles[QAT_OCF_MAX_INSTANCES]; + int32_t numCyInstances; +}; + +/* Function definitions */ +static void qat_ocf_freesession(device_t dev, crypto_session_t cses); +static int qat_ocf_probesession(device_t dev, + const struct crypto_session_params *csp); +static int qat_ocf_newsession(device_t dev, + crypto_session_t cses, + const struct crypto_session_params *csp); +static int qat_ocf_attach(device_t dev); +static int qat_ocf_detach(device_t dev); + +static void +symDpCallback(CpaCySymDpOpData *pOpData, + CpaStatus result, + CpaBoolean verifyResult) +{ + struct qat_ocf_cookie *qat_cookie; + struct cryptop *crp; + struct qat_ocf_dsession *qat_dsession = NULL; + struct qat_ocf_session *qat_session = NULL; + struct qat_ocf_instance *qat_instance = NULL; + CpaStatus status; + int rc = 0; + + qat_cookie = (struct qat_ocf_cookie *)pOpData->pCallbackTag; + if (!qat_cookie) + return; + + crp = qat_cookie->crp_op; + + qat_dsession = crypto_get_driver_session(crp->crp_session); + qat_instance = qat_dsession->qatInstance; + + status = qat_ocf_cookie_dma_post_sync(crp, pOpData); + if (CPA_STATUS_SUCCESS != status) { + rc = EIO; + goto exit; + } + + status = qat_ocf_cookie_dma_unload(crp, pOpData); + if (CPA_STATUS_SUCCESS != status) { + rc = EIO; + goto exit; + } + + /* Verify result */ + if (CPA_STATUS_SUCCESS != result) { + rc = EBADMSG; + goto exit; + } + + /* Verify digest by FW (GCM and CCM only) */ + if (CPA_TRUE != verifyResult) { + rc = EBADMSG; + goto exit; + } + + if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) + qat_session = &qat_dsession->encSession; + else + qat_session = &qat_dsession->decSession; + + /* Copy back digest result if it's stored in separated buffer */ + if (pOpData->digestResult && qat_session->authLen > 0) { + if ((crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) != 0) { + char icv[QAT_OCF_MAX_DIGEST] = { 0 }; + crypto_copydata(crp, + crp->crp_digest_start, + qat_session->authLen, + icv); + if (timingsafe_bcmp(icv, + qat_cookie->qat_ocf_digest, + qat_session->authLen) != 0) { + rc = EBADMSG; + goto exit; + } + } else { + crypto_copyback(crp, + crp->crp_digest_start, + qat_session->authLen, + qat_cookie->qat_ocf_digest); + } + } + +exit: + qat_ocf_cookie_free(qat_instance, qat_cookie); + crp->crp_etype = rc; + crypto_done(crp); + + return; +} + +static inline CpaPhysicalAddr +qatVirtToPhys(void *virtAddr) +{ + return (CpaPhysicalAddr)vtophys(virtAddr); +} + +static int +qat_ocf_probesession(device_t dev, const struct crypto_session_params *csp) +{ + if 
((csp->csp_flags & ~(CSP_F_SEPARATE_OUTPUT | CSP_F_SEPARATE_AAD)) != + 0) { + return EINVAL; + } + + switch (csp->csp_mode) { + case CSP_MODE_CIPHER: + switch (csp->csp_cipher_alg) { + case CRYPTO_AES_CBC: + case CRYPTO_AES_ICM: + if (csp->csp_ivlen != AES_BLOCK_LEN) + return EINVAL; + break; + case CRYPTO_AES_XTS: + if (csp->csp_ivlen != AES_XTS_IV_LEN) + return EINVAL; + break; + default: + return EINVAL; + } + break; + case CSP_MODE_DIGEST: + switch (csp->csp_auth_alg) { + case CRYPTO_SHA1: + case CRYPTO_SHA1_HMAC: + case CRYPTO_SHA2_256: + case CRYPTO_SHA2_256_HMAC: + case CRYPTO_SHA2_384: + case CRYPTO_SHA2_384_HMAC: + case CRYPTO_SHA2_512: + case CRYPTO_SHA2_512_HMAC: + break; + case CRYPTO_AES_NIST_GMAC: + if (csp->csp_ivlen != AES_GCM_IV_LEN) + return EINVAL; + break; + default: + return EINVAL; + } + break; + case CSP_MODE_AEAD: + switch (csp->csp_cipher_alg) { + case CRYPTO_AES_NIST_GCM_16: + if (csp->csp_ivlen != AES_GCM_IV_LEN) + return EINVAL; + break; + default: + return EINVAL; + } + break; + case CSP_MODE_ETA: + switch (csp->csp_auth_alg) { + case CRYPTO_SHA1_HMAC: + case CRYPTO_SHA2_256_HMAC: + case CRYPTO_SHA2_384_HMAC: + case CRYPTO_SHA2_512_HMAC: + switch (csp->csp_cipher_alg) { + case CRYPTO_AES_CBC: + case CRYPTO_AES_ICM: + if (csp->csp_ivlen != AES_BLOCK_LEN) + return EINVAL; + break; + case CRYPTO_AES_XTS: + if (csp->csp_ivlen != AES_XTS_IV_LEN) + return EINVAL; + break; + default: + return EINVAL; + } + break; + default: + return EINVAL; + } + break; + default: + return EINVAL; + } + + return CRYPTODEV_PROBE_HARDWARE; +} + +static CpaStatus +qat_ocf_session_init(device_t dev, + struct cryptop *crp, + struct qat_ocf_instance *qat_instance, + struct qat_ocf_session *qat_ssession) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + /* Crytpodev structures */ + crypto_session_t cses; + const struct crypto_session_params *csp; + /* DP API Session configuration */ + CpaCySymSessionSetupData sessionSetupData = { 0 }; + CpaCySymSessionCtx sessionCtx = NULL; + Cpa32U sessionCtxSize = 0; + + cses = crp->crp_session; + if (NULL == cses) { + device_printf(dev, "no crypto session in cryptodev request\n"); + return CPA_STATUS_FAIL; + } + + csp = crypto_get_params(cses); + if (NULL == csp) { + device_printf(dev, "no session in cryptodev session\n"); + return CPA_STATUS_FAIL; + } + + /* Common fields */ + sessionSetupData.sessionPriority = CPA_CY_PRIORITY_HIGH; + /* Cipher key */ + if (crp->crp_cipher_key) + sessionSetupData.cipherSetupData.pCipherKey = + crp->crp_cipher_key; + else + sessionSetupData.cipherSetupData.pCipherKey = + csp->csp_cipher_key; + sessionSetupData.cipherSetupData.cipherKeyLenInBytes = + csp->csp_cipher_klen; + + /* Auth key */ + if (crp->crp_auth_key) + sessionSetupData.hashSetupData.authModeSetupData.authKey = + crp->crp_auth_key; + else + sessionSetupData.hashSetupData.authModeSetupData.authKey = + csp->csp_auth_key; + sessionSetupData.hashSetupData.authModeSetupData.authKeyLenInBytes = + csp->csp_auth_klen; + + qat_ssession->aadLen = crp->crp_aad_length; + if (CPA_TRUE == is_sep_aad_supported(csp)) + sessionSetupData.hashSetupData.authModeSetupData.aadLenInBytes = + crp->crp_aad_length; + else + sessionSetupData.hashSetupData.authModeSetupData.aadLenInBytes = + 0; + + /* Just setup algorithm - regardless of mode */ + if (csp->csp_cipher_alg) { + sessionSetupData.symOperation = CPA_CY_SYM_OP_CIPHER; + + switch (csp->csp_cipher_alg) { + case CRYPTO_AES_CBC: + sessionSetupData.cipherSetupData.cipherAlgorithm = + CPA_CY_SYM_CIPHER_AES_CBC; + break; + case 
CRYPTO_AES_ICM: + sessionSetupData.cipherSetupData.cipherAlgorithm = + CPA_CY_SYM_CIPHER_AES_CTR; + break; + case CRYPTO_AES_XTS: + sessionSetupData.cipherSetupData.cipherAlgorithm = + CPA_CY_SYM_CIPHER_AES_XTS; + break; + case CRYPTO_AES_NIST_GCM_16: + sessionSetupData.cipherSetupData.cipherAlgorithm = + CPA_CY_SYM_CIPHER_AES_GCM; + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_AES_GCM; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_AUTH; + break; + default: + device_printf(dev, + "cipher_alg: %d not supported\n", + csp->csp_cipher_alg); + status = CPA_STATUS_UNSUPPORTED; + goto fail; + } + } + + if (csp->csp_auth_alg) { + switch (csp->csp_auth_alg) { + case CRYPTO_SHA1_HMAC: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA1; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_AUTH; + break; + case CRYPTO_SHA1: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA1; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_PLAIN; + break; + + case CRYPTO_SHA2_256_HMAC: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA256; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_AUTH; + break; + case CRYPTO_SHA2_256: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA256; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_PLAIN; + break; + + case CRYPTO_SHA2_224_HMAC: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA224; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_AUTH; + break; + case CRYPTO_SHA2_224: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA224; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_PLAIN; + break; + + case CRYPTO_SHA2_384_HMAC: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA384; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_AUTH; + break; + case CRYPTO_SHA2_384: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA384; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_PLAIN; + break; + + case CRYPTO_SHA2_512_HMAC: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA512; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_AUTH; + break; + case CRYPTO_SHA2_512: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_SHA512; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_PLAIN; + break; + case CRYPTO_AES_NIST_GMAC: + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_AES_GMAC; + break; + default: + status = CPA_STATUS_UNSUPPORTED; + goto fail; + } + } /* csp->csp_auth_alg */ + + /* Setting digest-length if no cipher-only mode is set */ + if (csp->csp_mode != CSP_MODE_CIPHER) { + lac_sym_qat_hash_defs_t *pHashDefsInfo = NULL; + if (csp->csp_auth_mlen) { + sessionSetupData.hashSetupData.digestResultLenInBytes = + csp->csp_auth_mlen; + qat_ssession->authLen = csp->csp_auth_mlen; + } else { + LacSymQat_HashDefsLookupGet( + qat_instance->cyInstHandle, + sessionSetupData.hashSetupData.hashAlgorithm, + &pHashDefsInfo); + if (NULL == pHashDefsInfo) { + device_printf( + dev, + "unable to find corresponding hash data\n"); + status = CPA_STATUS_UNSUPPORTED; + goto fail; + } + sessionSetupData.hashSetupData.digestResultLenInBytes = + pHashDefsInfo->algInfo->digestLength; + qat_ssession->authLen = + pHashDefsInfo->algInfo->digestLength; + } + sessionSetupData.verifyDigest = CPA_FALSE; 
+ } + + switch (csp->csp_mode) { + case CSP_MODE_AEAD: + sessionSetupData.symOperation = + CPA_CY_SYM_OP_ALGORITHM_CHAINING; + /* Place the digest result in a buffer unrelated to srcBuffer */ + sessionSetupData.digestIsAppended = CPA_TRUE; + /* For GCM and CCM driver forces to verify digest on HW */ + sessionSetupData.verifyDigest = CPA_TRUE; + if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) { + sessionSetupData.cipherSetupData.cipherDirection = + CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT; + sessionSetupData.algChainOrder = + CPA_CY_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH; + } else { + sessionSetupData.cipherSetupData.cipherDirection = + CPA_CY_SYM_CIPHER_DIRECTION_DECRYPT; + sessionSetupData.algChainOrder = + CPA_CY_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER; + } + break; + case CSP_MODE_ETA: + sessionSetupData.symOperation = + CPA_CY_SYM_OP_ALGORITHM_CHAINING; + /* Place the digest result in a buffer unrelated to srcBuffer */ + sessionSetupData.digestIsAppended = CPA_FALSE; + /* Due to FW limitation to verify only appended MACs */ + sessionSetupData.verifyDigest = CPA_FALSE; + if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) { + sessionSetupData.cipherSetupData.cipherDirection = + CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT; + sessionSetupData.algChainOrder = + CPA_CY_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH; + } else { + sessionSetupData.cipherSetupData.cipherDirection = + CPA_CY_SYM_CIPHER_DIRECTION_DECRYPT; + sessionSetupData.algChainOrder = + CPA_CY_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER; + } + break; + case CSP_MODE_CIPHER: + if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) { + sessionSetupData.cipherSetupData.cipherDirection = + CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT; + } else { + sessionSetupData.cipherSetupData.cipherDirection = + CPA_CY_SYM_CIPHER_DIRECTION_DECRYPT; + } + sessionSetupData.symOperation = CPA_CY_SYM_OP_CIPHER; + break; + case CSP_MODE_DIGEST: + sessionSetupData.symOperation = CPA_CY_SYM_OP_HASH; + if (csp->csp_auth_alg == CRYPTO_AES_NIST_GMAC) { + sessionSetupData.symOperation = + CPA_CY_SYM_OP_ALGORITHM_CHAINING; + /* GMAC is always encrypt */ + sessionSetupData.cipherSetupData.cipherDirection = + CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT; + sessionSetupData.algChainOrder = + CPA_CY_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH; + sessionSetupData.cipherSetupData.cipherAlgorithm = + CPA_CY_SYM_CIPHER_AES_GCM; + sessionSetupData.hashSetupData.hashAlgorithm = + CPA_CY_SYM_HASH_AES_GMAC; + sessionSetupData.hashSetupData.hashMode = + CPA_CY_SYM_HASH_MODE_AUTH; + /* Same key for cipher and auth */ + sessionSetupData.cipherSetupData.pCipherKey = + csp->csp_auth_key; + sessionSetupData.cipherSetupData.cipherKeyLenInBytes = + csp->csp_auth_klen; + /* Generated GMAC stored in separated buffer */ + sessionSetupData.digestIsAppended = CPA_FALSE; + /* Digest verification not allowed in GMAC case */ + sessionSetupData.verifyDigest = CPA_FALSE; + /* No AAD allowed */ + sessionSetupData.hashSetupData.authModeSetupData + .aadLenInBytes = 0; + } else { + sessionSetupData.cipherSetupData.cipherDirection = + CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT; + sessionSetupData.symOperation = CPA_CY_SYM_OP_HASH; + sessionSetupData.digestIsAppended = CPA_FALSE; + } + break; + default: + device_printf(dev, + "%s: unhandled crypto algorithm %d, %d\n", + __func__, + csp->csp_cipher_alg, + csp->csp_auth_alg); + status = CPA_STATUS_FAIL; + goto fail; + } + + /* Extracting session size */ + status = cpaCySymSessionCtxGetSize(qat_instance->cyInstHandle, + &sessionSetupData, + &sessionCtxSize); + if (CPA_STATUS_SUCCESS != status) { + device_printf(dev, "unable to get session 
size\n"); + goto fail; + } + + /* Allocating contiguous memory for session */ + sessionCtx = contigmalloc(sessionCtxSize, + M_QAT_OCF, + M_NOWAIT, + 0, + ~1UL, + 1 << (bsrl(sessionCtxSize - 1) + 1), + 0); + if (NULL == sessionCtx) { + device_printf(dev, "unable to allocate memory for session\n"); + status = CPA_STATUS_RESOURCE; + goto fail; + } + + status = cpaCySymDpInitSession(qat_instance->cyInstHandle, + &sessionSetupData, + sessionCtx); + if (CPA_STATUS_SUCCESS != status) { + device_printf(dev, "session initialization failed\n"); + goto fail; + } + + /* NOTE: lets keep double session (both directions) approach to overcome + * lack of direction update in FBSD QAT. + */ + qat_ssession->sessionCtx = sessionCtx; + qat_ssession->sessionCtxSize = sessionCtxSize; + + return CPA_STATUS_SUCCESS; + +fail: + /* Release resources if any */ + if (sessionCtx) + contigfree(sessionCtx, sessionCtxSize, M_QAT_OCF); + + return status; +} + +static int +qat_ocf_newsession(device_t dev, + crypto_session_t cses, + const struct crypto_session_params *csp) +{ + /* Cryptodev QAT structures */ + struct qat_ocf_softc *qat_softc; + struct qat_ocf_dsession *qat_dsession; + struct qat_ocf_instance *qat_instance; + u_int cpu_id = PCPU_GET(cpuid); + + /* Create cryptodev session */ + qat_softc = device_get_softc(dev); + qat_instance = + &qat_softc->cyInstHandles[cpu_id % qat_softc->numCyInstances]; + qat_dsession = crypto_get_driver_session(cses); + if (NULL == qat_dsession) { + device_printf(dev, "Unable to create new session\n"); + return (EINVAL); + } + + /* Add only instance at this point remaining operations moved to + * lazy session init */ + qat_dsession->qatInstance = qat_instance; + + return 0; +} + +static CpaStatus +qat_ocf_remove_session(device_t dev, + CpaInstanceHandle cyInstHandle, + struct qat_ocf_session *qat_session) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + if (NULL == qat_session->sessionCtx) + return CPA_STATUS_SUCCESS; + + /* User callback is executed right before decrementing pending + * callback atomic counter. To avoid removing session rejection + * we have to wait a very short while for counter update + * after call back execution. 
*/ + status = qat_ocf_wait_for_session(qat_session->sessionCtx, + QAT_OCF_SESSION_WAIT_TIMEOUT_MS); + if (CPA_STATUS_SUCCESS != status) { + device_printf(dev, "waiting for session un-busy failed\n"); + return CPA_STATUS_FAIL; + } + + status = cpaCySymDpRemoveSession(cyInstHandle, qat_session->sessionCtx); + if (CPA_STATUS_SUCCESS != status) { + device_printf(dev, "error while removing session\n"); + return CPA_STATUS_FAIL; + } + + explicit_bzero(qat_session->sessionCtx, qat_session->sessionCtxSize); + contigfree(qat_session->sessionCtx, + qat_session->sessionCtxSize, + M_QAT_OCF); + qat_session->sessionCtx = NULL; + qat_session->sessionCtxSize = 0; + + return CPA_STATUS_SUCCESS; +} + +static void +qat_ocf_freesession(device_t dev, crypto_session_t cses) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + struct qat_ocf_dsession *qat_dsession = NULL; + struct qat_ocf_instance *qat_instance = NULL; + + qat_dsession = crypto_get_driver_session(cses); + qat_instance = qat_dsession->qatInstance; + mtx_lock(&qat_instance->cyInstMtx); + status = qat_ocf_remove_session(dev, + qat_dsession->qatInstance->cyInstHandle, + &qat_dsession->encSession); + if (CPA_STATUS_SUCCESS != status) + device_printf(dev, "unable to remove encrypt session\n"); + status = qat_ocf_remove_session(dev, + qat_dsession->qatInstance->cyInstHandle, + &qat_dsession->decSession); + if (CPA_STATUS_SUCCESS != status) + device_printf(dev, "unable to remove decrypt session\n"); + mtx_unlock(&qat_instance->cyInstMtx); +} + +/* QAT GCM/CCM FW API are only algorithms which support separated AAD. */ +static CpaStatus +qat_ocf_load_aad_gcm(struct cryptop *crp, struct qat_ocf_cookie *qat_cookie) +{ + CpaCySymDpOpData *pOpData; + + pOpData = &qat_cookie->pOpdata; + + if (NULL != crp->crp_aad) + memcpy(qat_cookie->qat_ocf_gcm_aad, + crp->crp_aad, + crp->crp_aad_length); + else + crypto_copydata(crp, + crp->crp_aad_start, + crp->crp_aad_length, + qat_cookie->qat_ocf_gcm_aad); + + pOpData->pAdditionalAuthData = qat_cookie->qat_ocf_gcm_aad; + pOpData->additionalAuthData = qat_cookie->qat_ocf_gcm_aad_paddr; + + return CPA_STATUS_SUCCESS; +} + +static CpaStatus +qat_ocf_load_aad(struct cryptop *crp, struct qat_ocf_cookie *qat_cookie) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + const struct crypto_session_params *csp; + CpaCySymDpOpData *pOpData; + struct qat_ocf_load_cb_arg args; + + pOpData = &qat_cookie->pOpdata; + pOpData->pAdditionalAuthData = NULL; + pOpData->additionalAuthData = 0UL; + + if (crp->crp_aad_length == 0) + return CPA_STATUS_SUCCESS; + + if (crp->crp_aad_length > ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX) + return CPA_STATUS_FAIL; + + csp = crypto_get_params(crp->crp_session); + + /* Handle GCM/CCM case */ + if (CPA_TRUE == is_sep_aad_supported(csp)) + return qat_ocf_load_aad_gcm(crp, qat_cookie); + + if (NULL == crp->crp_aad) { + /* AAD already embedded in source buffer */ + pOpData->messageLenToCipherInBytes = crp->crp_payload_length; + pOpData->cryptoStartSrcOffsetInBytes = crp->crp_payload_start; + + pOpData->messageLenToHashInBytes = + crp->crp_aad_length + crp->crp_payload_length; + pOpData->hashStartSrcOffsetInBytes = crp->crp_aad_start; + + return CPA_STATUS_SUCCESS; + } + + /* Separated AAD not supported by QAT - lets place the content + * of ADD buffer at the very beginning of source SGL */ + args.crp_op = crp; + args.qat_cookie = qat_cookie; + args.pOpData = pOpData; + args.error = 0; + status = bus_dmamap_load(qat_cookie->gcm_aad_dma_mem.dma_tag, + qat_cookie->gcm_aad_dma_mem.dma_map, + crp->crp_aad, + crp->crp_aad_length, + 
qat_ocf_crypto_load_aadbuf_cb, + &args, + BUS_DMA_NOWAIT); + qat_cookie->is_sep_aad_used = CPA_TRUE; + + /* Right after this step we have AAD placed in the first flat buffer + * in source SGL */ + pOpData->messageLenToCipherInBytes = crp->crp_payload_length; + pOpData->cryptoStartSrcOffsetInBytes = + crp->crp_aad_length + crp->crp_aad_start + crp->crp_payload_start; + + pOpData->messageLenToHashInBytes = + crp->crp_aad_length + crp->crp_payload_length; + pOpData->hashStartSrcOffsetInBytes = crp->crp_aad_start; + + return status; +} + +static CpaStatus +qat_ocf_load(struct cryptop *crp, struct qat_ocf_cookie *qat_cookie) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + CpaCySymDpOpData *pOpData; + struct qat_ocf_load_cb_arg args; + /* cryptodev internals */ + const struct crypto_session_params *csp; + + pOpData = &qat_cookie->pOpdata; + + csp = crypto_get_params(crp->crp_session); + + /* Load IV buffer if present */ + if (csp->csp_ivlen > 0) { + memset(qat_cookie->qat_ocf_iv_buf, + 0, + sizeof(qat_cookie->qat_ocf_iv_buf)); + crypto_read_iv(crp, qat_cookie->qat_ocf_iv_buf); + pOpData->iv = qat_cookie->qat_ocf_iv_buf_paddr; + pOpData->pIv = qat_cookie->qat_ocf_iv_buf; + pOpData->ivLenInBytes = csp->csp_ivlen; + } + + /* GCM/CCM - load AAD to separated buffer + * AES+SHA - load AAD to first flat in SGL */ + status = qat_ocf_load_aad(crp, qat_cookie); + if (CPA_STATUS_SUCCESS != status) + goto fail; + + /* Load source buffer */ + args.crp_op = crp; + args.qat_cookie = qat_cookie; + args.pOpData = pOpData; + args.error = 0; + status = bus_dmamap_load_crp_buffer(qat_cookie->src_dma_mem.dma_tag, + qat_cookie->src_dma_mem.dma_map, + &crp->crp_buf, + qat_ocf_crypto_load_buf_cb, + &args, + BUS_DMA_NOWAIT); + if (CPA_STATUS_SUCCESS != status) + goto fail; + pOpData->srcBuffer = qat_cookie->src_buffer_list_paddr; + pOpData->srcBufferLen = CPA_DP_BUFLIST; + + /* Load destination buffer */ + if (CRYPTO_HAS_OUTPUT_BUFFER(crp)) { + status = + bus_dmamap_load_crp_buffer(qat_cookie->dst_dma_mem.dma_tag, + qat_cookie->dst_dma_mem.dma_map, + &crp->crp_obuf, + qat_ocf_crypto_load_obuf_cb, + &args, + BUS_DMA_NOWAIT); + if (CPA_STATUS_SUCCESS != status) + goto fail; + pOpData->dstBuffer = qat_cookie->dst_buffer_list_paddr; + pOpData->dstBufferLen = CPA_DP_BUFLIST; + } else { + pOpData->dstBuffer = pOpData->srcBuffer; + pOpData->dstBufferLen = pOpData->srcBufferLen; + } + + if (CPA_TRUE == is_use_sep_digest(csp)) + pOpData->digestResult = qat_cookie->qat_ocf_digest_paddr; + else + pOpData->digestResult = 0UL; + + /* GMAC - aka zero length buffer */ + if (CPA_TRUE == is_gmac_exception(csp)) + pOpData->messageLenToCipherInBytes = 0; + +fail: + return status; +} + +static int +qat_ocf_check_input(device_t dev, struct cryptop *crp) +{ + const struct crypto_session_params *csp; + csp = crypto_get_params(crp->crp_session); + + if (crypto_buffer_len(&crp->crp_buf) > QAT_OCF_MAX_LEN) + return E2BIG; + + if (CPA_TRUE == is_sep_aad_supported(csp) && + (crp->crp_aad_length > ICP_QAT_FW_CCM_GCM_AAD_SZ_MAX)) + return EBADMSG; + + return 0; +} + +static int +qat_ocf_process(device_t dev, struct cryptop *crp, int hint) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + int rc = 0; + struct qat_ocf_dsession *qat_dsession = NULL; + struct qat_ocf_session *qat_session = NULL; + struct qat_ocf_instance *qat_instance = NULL; + CpaCySymDpOpData *pOpData = NULL; + struct qat_ocf_cookie *qat_cookie = NULL; + CpaBoolean memLoaded = CPA_FALSE; + + rc = qat_ocf_check_input(dev, crp); + if (rc) + goto fail; + + qat_dsession = 
crypto_get_driver_session(crp->crp_session); + + if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) + qat_session = &qat_dsession->encSession; + else + qat_session = &qat_dsession->decSession; + qat_instance = qat_dsession->qatInstance; + + status = qat_ocf_cookie_alloc(qat_instance, &qat_cookie); + if (CPA_STATUS_SUCCESS != status) { + rc = EAGAIN; + goto fail; + } + + qat_cookie->crp_op = crp; + + /* Common request fields */ + pOpData = &qat_cookie->pOpdata; + pOpData->instanceHandle = qat_instance->cyInstHandle; + pOpData->sessionCtx = NULL; + + /* Cipher fields */ + pOpData->cryptoStartSrcOffsetInBytes = crp->crp_payload_start; + pOpData->messageLenToCipherInBytes = crp->crp_payload_length; + /* Digest fields - any exceptions from this basic rules are covered + * in qat_ocf_load */ + pOpData->hashStartSrcOffsetInBytes = crp->crp_payload_start; + pOpData->messageLenToHashInBytes = crp->crp_payload_length; + + status = qat_ocf_load(crp, qat_cookie); + if (CPA_STATUS_SUCCESS != status) { + device_printf(dev, + "unable to load OCF buffers to QAT DMA " + "transaction\n"); + rc = EIO; + goto fail; + } + memLoaded = CPA_TRUE; + + status = qat_ocf_cookie_dma_pre_sync(crp, pOpData); + if (CPA_STATUS_SUCCESS != status) { + device_printf(dev, "unable to sync DMA buffers\n"); + rc = EIO; + goto fail; + } + + mtx_lock(&qat_instance->cyInstMtx); + /* Session initialization at the first request. It's done + * in such way to overcome missing QAT specific session data + * such like AAD length and limited possibility to update + * QAT session while handling traffic. + */ + if (NULL == qat_session->sessionCtx) { + status = + qat_ocf_session_init(dev, crp, qat_instance, qat_session); + if (CPA_STATUS_SUCCESS != status) { + mtx_unlock(&qat_instance->cyInstMtx); + device_printf(dev, "unable to init session\n"); + rc = EIO; + goto fail; + } + } else { + status = qat_ocf_handle_session_update(qat_dsession, crp); + if (CPA_STATUS_RESOURCE == status) { + mtx_unlock(&qat_instance->cyInstMtx); + rc = EAGAIN; + goto fail; + } else if (CPA_STATUS_SUCCESS != status) { + mtx_unlock(&qat_instance->cyInstMtx); + rc = EIO; + goto fail; + } + } + pOpData->sessionCtx = qat_session->sessionCtx; + status = cpaCySymDpEnqueueOp(pOpData, CPA_TRUE); + mtx_unlock(&qat_instance->cyInstMtx); + if (CPA_STATUS_SUCCESS != status) { + if (CPA_STATUS_RETRY == status) { + rc = EAGAIN; + goto fail; + } + device_printf(dev, + "unable to send request. 
Status: %d\n", + status); + rc = EIO; + goto fail; + } + + return 0; +fail: + if (qat_cookie) { + if (memLoaded) + qat_ocf_cookie_dma_unload(crp, pOpData); + qat_ocf_cookie_free(qat_instance, qat_cookie); + } + crp->crp_etype = rc; + crypto_done(crp); + + return 0; +} + +static void +qat_ocf_identify(driver_t *drv, device_t parent) +{ + if (device_find_child(parent, "qat_ocf", -1) == NULL && + BUS_ADD_CHILD(parent, 200, "qat_ocf", -1) == 0) + device_printf(parent, "qat_ocf: could not attach!"); +} + +static int +qat_ocf_probe(device_t dev) +{ + device_set_desc(dev, "QAT engine"); + return (BUS_PROBE_NOWILDCARD); +} + +static CpaStatus +qat_ocf_get_irq_instances(CpaInstanceHandle *cyInstHandles, + Cpa16U cyInstHandlesSize, + Cpa16U *foundInstances) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + icp_accel_dev_t **pAdfInsts = NULL; + icp_accel_dev_t *dev_addr = NULL; + sal_t *baseAddr = NULL; + sal_list_t *listTemp = NULL; + CpaInstanceHandle cyInstHandle; + CpaInstanceInfo2 info; + Cpa16U numDevices; + Cpa32U instCtr = 0; + Cpa32U i; + + /* Get the number of devices */ + status = icp_amgr_getNumInstances(&numDevices); + if (CPA_STATUS_SUCCESS != status) + return status; + + /* Allocate memory to store addr of accel_devs */ + pAdfInsts = + malloc(numDevices * sizeof(icp_accel_dev_t *), M_QAT_OCF, M_WAITOK); + + /* Get ADF to return all accel_devs that support either + * symmetric or asymmetric crypto */ + status = icp_amgr_getAllAccelDevByCapabilities( + (ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC), pAdfInsts, &numDevices); + if (CPA_STATUS_SUCCESS != status) { + free(pAdfInsts, M_QAT_OCF); + return status; + } + + for (i = 0; i < numDevices; i++) { + dev_addr = (icp_accel_dev_t *)pAdfInsts[i]; + baseAddr = dev_addr->pSalHandle; + if (NULL == baseAddr) + continue; + listTemp = baseAddr->sym_services; + while (NULL != listTemp) { + cyInstHandle = SalList_getObject(listTemp); + status = cpaCyInstanceGetInfo2(cyInstHandle, &info); + if (CPA_STATUS_SUCCESS != status) + continue; + listTemp = SalList_next(listTemp); + if (CPA_TRUE == info.isPolled) + continue; + if (instCtr >= cyInstHandlesSize) + break; + cyInstHandles[instCtr++] = cyInstHandle; + } + } + free(pAdfInsts, M_QAT_OCF); + *foundInstances = instCtr; + + return CPA_STATUS_SUCCESS; +} + +static CpaStatus +qat_ocf_start_instances(struct qat_ocf_softc *qat_softc, device_t dev) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + Cpa16U numInstances = 0; + CpaInstanceHandle cyInstHandles[QAT_OCF_MAX_INSTANCES] = { 0 }; + CpaInstanceHandle cyInstHandle = NULL; + Cpa32U startedInstances = 0; + Cpa32U i; + + qat_softc->numCyInstances = 0; + status = qat_ocf_get_irq_instances(cyInstHandles, + QAT_OCF_MAX_INSTANCES, + &numInstances); + if (CPA_STATUS_SUCCESS != status) + return status; + if (0 == numInstances) + return CPA_STATUS_RESOURCE; + + for (i = 0; i < numInstances; i++) { + struct qat_ocf_instance *qat_ocf_instance; + + cyInstHandle = cyInstHandles[i]; + if (!cyInstHandle) + continue; + + /* Starting instance */ + status = cpaCyStartInstance(cyInstHandle); + if (CPA_STATUS_SUCCESS != status) { + device_printf(qat_softc->sc_dev, + "unable to get start instance\n"); + continue; + } + + status = + cpaCySetAddressTranslation(cyInstHandle, qatVirtToPhys); + if (CPA_STATUS_SUCCESS != status) { + device_printf(qat_softc->sc_dev, + "unable to add virt to phys callback"); + goto fail; + } + + status = cpaCySymDpRegCbFunc(cyInstHandle, symDpCallback); + if (CPA_STATUS_SUCCESS != status) { + device_printf(qat_softc->sc_dev, + "unable to add user 
callback\n"); + goto fail; + } + + qat_ocf_instance = &qat_softc->cyInstHandles[startedInstances]; + qat_ocf_instance->cyInstHandle = cyInstHandle; + mtx_init(&qat_ocf_instance->cyInstMtx, + "Instance MTX", + NULL, + MTX_DEF); + + /* Initialize cookie pool */ + status = qat_ocf_cookie_pool_init(qat_ocf_instance, dev); + if (CPA_STATUS_SUCCESS != status) { + device_printf(qat_softc->sc_dev, + "unable to create cookie pool\n"); + goto fail; + } + + qat_ocf_instance->driver_id = qat_softc->cryptodev_id; + + startedInstances++; + continue; + fail: + /* Stop instance */ + status = cpaCyStopInstance(cyInstHandle); + if (CPA_STATUS_SUCCESS != status) + device_printf(qat_softc->sc_dev, + "unable to stop the instance\n"); + continue; + } + qat_softc->numCyInstances = startedInstances; + + /* Success if at least one instance has been set */ + if (!qat_softc->numCyInstances) + return CPA_STATUS_FAIL; + + return CPA_STATUS_SUCCESS; +} + +static CpaStatus +qat_ocf_stop_instances(struct qat_ocf_softc *qat_softc) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + int i; + + for (i = 0; i < qat_softc->numCyInstances; i++) { + struct qat_ocf_instance *qat_instance; + + qat_instance = &qat_softc->cyInstHandles[i]; + status = cpaCyStopInstance(qat_instance->cyInstHandle); + if (CPA_STATUS_SUCCESS != status) { + pr_err("QAT: stopping instance id: %d failed\n", i); + mtx_unlock(&qat_instance->cyInstMtx); + continue; + } + qat_ocf_cookie_pool_deinit(qat_instance); + mtx_destroy(&qat_instance->cyInstMtx); + } + + return status; +} + +static int +qat_ocf_attach(device_t dev) +{ + int status; + struct qat_ocf_softc *qat_softc; + int32_t cryptodev_id; + + qat_softc = device_get_softc(dev); + qat_softc->sc_dev = dev; + + cryptodev_id = crypto_get_driverid(dev, + sizeof(struct qat_ocf_dsession), + CRYPTOCAP_F_HARDWARE); + if (cryptodev_id < 0) { + device_printf(dev, "cannot initialize!\n"); + goto fail; + } + qat_softc->cryptodev_id = cryptodev_id; + + /* Starting instances for OCF */ + status = qat_ocf_start_instances(qat_softc, dev); + if (status) { + device_printf(dev, "no QAT IRQ instances available\n"); + goto fail; + } + + return 0; +fail: + qat_ocf_detach(dev); + + return (ENXIO); +} + +static int +qat_ocf_detach(device_t dev) +{ + struct qat_ocf_softc *qat_softc = NULL; + CpaStatus cpaStatus; + int status = 0; + + qat_softc = device_get_softc(dev); + + if (qat_softc->cryptodev_id >= 0) { + status = crypto_unregister_all(qat_softc->cryptodev_id); + if (status) + device_printf(dev, + "unable to unregister QAt backend\n"); + } + + /* Stop QAT instances */ + cpaStatus = qat_ocf_stop_instances(qat_softc); + if (CPA_STATUS_SUCCESS != cpaStatus) { + device_printf(dev, "unable to stop instances\n"); + status = EIO; + } + + return status; +} + +static device_method_t qat_ocf_methods[] = + { DEVMETHOD(device_identify, qat_ocf_identify), + DEVMETHOD(device_probe, qat_ocf_probe), + DEVMETHOD(device_attach, qat_ocf_attach), + DEVMETHOD(device_detach, qat_ocf_detach), + + /* Cryptodev interface */ + DEVMETHOD(cryptodev_probesession, qat_ocf_probesession), + DEVMETHOD(cryptodev_newsession, qat_ocf_newsession), + DEVMETHOD(cryptodev_freesession, qat_ocf_freesession), + DEVMETHOD(cryptodev_process, qat_ocf_process), + + DEVMETHOD_END }; + +static driver_t qat_ocf_driver = { + .name = "qat_ocf", + .methods = qat_ocf_methods, + .size = sizeof(struct qat_ocf_softc), +}; + + +DRIVER_MODULE_ORDERED(qat, + nexus, + qat_ocf_driver, + NULL, + NULL, + SI_ORDER_ANY); +MODULE_VERSION(qat, 1); +MODULE_DEPEND(qat, qat_c62x, 1, 1, 1); 
+MODULE_DEPEND(qat, qat_200xx, 1, 1, 1); +MODULE_DEPEND(qat, qat_c3xxx, 1, 1, 1); +MODULE_DEPEND(qat, qat_c4xxx, 1, 1, 1); +MODULE_DEPEND(qat, qat_dh895xcc, 1, 1, 1); +MODULE_DEPEND(qat, crypto, 1, 1, 1); +MODULE_DEPEND(qat, qat_common, 1, 1, 1); +MODULE_DEPEND(qat, qat_api, 1, 1, 1); +MODULE_DEPEND(qat, linuxkpi, 1, 1, 1); diff --git a/sys/dev/qat/qat/qat_ocf_mem_pool.c b/sys/dev/qat/qat/qat_ocf_mem_pool.c new file mode 100644 index 000000000000..5548b57c0471 --- /dev/null +++ b/sys/dev/qat/qat/qat_ocf_mem_pool.c @@ -0,0 +1,564 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/* System headers */ +#include <sys/param.h> +#include <sys/systm.h> +#include <sys/bus.h> +#include <sys/kernel.h> +#include <sys/mbuf.h> +#include <sys/mutex.h> +#include <machine/bus.h> + +/* Cryptodev headers */ +#include <opencrypto/cryptodev.h> +#include <opencrypto/xform.h> + +/* QAT specific headers */ +#include "qat_ocf_mem_pool.h" +#include "qat_ocf_utils.h" +#include "cpa.h" + +/* Private functions */ +static void +qat_ocf_alloc_single_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error) +{ + struct qat_ocf_dma_mem *dma_mem; + + if (error != 0) + return; + + dma_mem = arg; + dma_mem->dma_seg = segs[0]; +} + +static int +qat_ocf_populate_buf_list_cb(struct qat_ocf_buffer_list *buffers, + bus_dma_segment_t *segs, + int niseg, + int skip_seg, + int skip_bytes) +{ + CpaPhysFlatBuffer *flatBuffer; + bus_addr_t segment_addr; + bus_size_t segment_len; + int iseg, oseg; + + for (iseg = 0, oseg = skip_seg; + iseg < niseg && oseg < QAT_OCF_MAX_FLATS; + iseg++) { + segment_addr = segs[iseg].ds_addr; + segment_len = segs[iseg].ds_len; + + if (skip_bytes > 0) { + if (skip_bytes < segment_len) { + segment_addr += skip_bytes; + segment_len -= skip_bytes; + skip_bytes = 0; + } else { + skip_bytes -= segment_len; + continue; + } + } + flatBuffer = &buffers->flatBuffers[oseg++]; + flatBuffer->dataLenInBytes = (Cpa32U)segment_len; + flatBuffer->bufferPhysAddr = (CpaPhysicalAddr)segment_addr; + }; + buffers->numBuffers = oseg; + + return iseg < niseg ? 
E2BIG : 0; +} + +void +qat_ocf_crypto_load_aadbuf_cb(void *_arg, + bus_dma_segment_t *segs, + int nseg, + int error) +{ + struct qat_ocf_load_cb_arg *arg; + struct qat_ocf_cookie *qat_cookie; + + arg = _arg; + if (error != 0) { + arg->error = error; + return; + } + + qat_cookie = arg->qat_cookie; + arg->error = qat_ocf_populate_buf_list_cb( + &qat_cookie->src_buffers, segs, nseg, 0, 0); +} + +void +qat_ocf_crypto_load_buf_cb(void *_arg, + bus_dma_segment_t *segs, + int nseg, + int error) +{ + struct qat_ocf_cookie *qat_cookie; + struct qat_ocf_load_cb_arg *arg; + int start_segment = 0, skip_bytes = 0; + + arg = _arg; + if (error != 0) { + arg->error = error; + return; + } + + qat_cookie = arg->qat_cookie; + + skip_bytes = 0; + start_segment = qat_cookie->src_buffers.numBuffers; + + arg->error = qat_ocf_populate_buf_list_cb( + &qat_cookie->src_buffers, segs, nseg, start_segment, skip_bytes); +} + +void +qat_ocf_crypto_load_obuf_cb(void *_arg, + bus_dma_segment_t *segs, + int nseg, + int error) +{ + struct qat_ocf_load_cb_arg *arg; + struct cryptop *crp; + struct qat_ocf_cookie *qat_cookie; + const struct crypto_session_params *csp; + int osegs = 0, to_copy = 0; + + arg = _arg; + if (error != 0) { + arg->error = error; + return; + } + + crp = arg->crp_op; + qat_cookie = arg->qat_cookie; + csp = crypto_get_params(crp->crp_session); + + /* + * The payload must start at the same offset in the output SG list as in + * the input SG list. Copy over SG entries from the input corresponding + * to the AAD buffer. + */ + if (crp->crp_aad_length == 0 || + (CPA_TRUE == is_sep_aad_supported(csp) && crp->crp_aad)) { + arg->error = + qat_ocf_populate_buf_list_cb(&qat_cookie->dst_buffers, + segs, + nseg, + 0, + crp->crp_payload_output_start); + return; + } + + /* Copy AAD from source SGL to keep payload in the same position in + * destination buffers */ + if (NULL == crp->crp_aad) + to_copy = crp->crp_payload_start - crp->crp_aad_start; + else + to_copy = crp->crp_aad_length; + + for (; osegs < qat_cookie->src_buffers.numBuffers; osegs++) { + CpaPhysFlatBuffer *src_flat; + CpaPhysFlatBuffer *dst_flat; + int data_len; + + if (to_copy <= 0) + break; + + src_flat = &qat_cookie->src_buffers.flatBuffers[osegs]; + dst_flat = &qat_cookie->dst_buffers.flatBuffers[osegs]; + + dst_flat->bufferPhysAddr = src_flat->bufferPhysAddr; + data_len = imin(src_flat->dataLenInBytes, to_copy); + dst_flat->dataLenInBytes = data_len; + to_copy -= data_len; + } + + arg->error = + qat_ocf_populate_buf_list_cb(&qat_cookie->dst_buffers, + segs, + nseg, + osegs, + crp->crp_payload_output_start); +} + +static int +qat_ocf_alloc_dma_mem(device_t dev, + struct qat_ocf_dma_mem *dma_mem, + int nseg, + bus_size_t size, + bus_size_t alignment) +{ + int error; + + error = bus_dma_tag_create(bus_get_dma_tag(dev), + alignment, + 0, /* alignment, boundary */ + BUS_SPACE_MAXADDR, /* lowaddr */ + BUS_SPACE_MAXADDR, /* highaddr */ + NULL, + NULL, /* filter, filterarg */ + size, /* maxsize */ + nseg, /* nsegments */ + size, /* maxsegsize */ + BUS_DMA_COHERENT, /* flags */ + NULL, + NULL, /* lockfunc, lockarg */ + &dma_mem->dma_tag); + if (error != 0) { + device_printf(dev, + "couldn't create DMA tag, error = %d\n", + error); + return error; + } + + error = + bus_dmamem_alloc(dma_mem->dma_tag, + &dma_mem->dma_vaddr, + BUS_DMA_NOWAIT | BUS_DMA_ZERO | BUS_DMA_COHERENT, + &dma_mem->dma_map); + if (error != 0) { + device_printf(dev, + "couldn't allocate dmamem, error = %d\n", + error); + goto fail_0; + } + + error = bus_dmamap_load(dma_mem->dma_tag, + 
dma_mem->dma_map, + dma_mem->dma_vaddr, + size, + qat_ocf_alloc_single_cb, + dma_mem, + BUS_DMA_NOWAIT); + if (error) { + device_printf(dev, + "couldn't load dmamem map, error = %d\n", + error); + goto fail_1; + } + + return 0; +fail_1: + bus_dmamem_free(dma_mem->dma_tag, dma_mem->dma_vaddr, dma_mem->dma_map); +fail_0: + bus_dma_tag_destroy(dma_mem->dma_tag); + + return error; +} + +static void +qat_ocf_free_dma_mem(struct qat_ocf_dma_mem *qdm) +{ + if (qdm->dma_tag != NULL && qdm->dma_vaddr != NULL) { + bus_dmamap_unload(qdm->dma_tag, qdm->dma_map); + bus_dmamem_free(qdm->dma_tag, qdm->dma_vaddr, qdm->dma_map); + bus_dma_tag_destroy(qdm->dma_tag); + explicit_bzero(qdm, sizeof(*qdm)); + } +} + +static int +qat_ocf_dma_tag_and_map(device_t dev, + struct qat_ocf_dma_mem *dma_mem, + bus_size_t size, + bus_size_t segs) +{ + int error; + + error = bus_dma_tag_create(bus_get_dma_tag(dev), + 1, + 0, /* alignment, boundary */ + BUS_SPACE_MAXADDR, /* lowaddr */ + BUS_SPACE_MAXADDR, /* highaddr */ + NULL, + NULL, /* filter, filterarg */ + size, /* maxsize */ + segs, /* nsegments */ + size, /* maxsegsize */ + BUS_DMA_COHERENT, /* flags */ + NULL, + NULL, /* lockfunc, lockarg */ + &dma_mem->dma_tag); + if (error != 0) + return error; + + error = bus_dmamap_create(dma_mem->dma_tag, + BUS_DMA_COHERENT, + &dma_mem->dma_map); + if (error != 0) + return error; + + return 0; +} + +static void +qat_ocf_clear_cookie(struct qat_ocf_cookie *qat_cookie) +{ + qat_cookie->src_buffers.numBuffers = 0; + qat_cookie->dst_buffers.numBuffers = 0; + qat_cookie->is_sep_aad_used = CPA_FALSE; + explicit_bzero(qat_cookie->qat_ocf_iv_buf, + sizeof(qat_cookie->qat_ocf_iv_buf)); + explicit_bzero(qat_cookie->qat_ocf_digest, + sizeof(qat_cookie->qat_ocf_digest)); + explicit_bzero(qat_cookie->qat_ocf_gcm_aad, + sizeof(qat_cookie->qat_ocf_gcm_aad)); + qat_cookie->crp_op = NULL; +} + +/* Public functions */ +CpaStatus +qat_ocf_cookie_dma_pre_sync(struct cryptop *crp, CpaCySymDpOpData *pOpData) +{ + struct qat_ocf_cookie *qat_cookie; + + if (NULL == pOpData->pCallbackTag) + return CPA_STATUS_FAIL; + + qat_cookie = (struct qat_ocf_cookie *)pOpData->pCallbackTag; + + if (CPA_TRUE == qat_cookie->is_sep_aad_used) { + bus_dmamap_sync(qat_cookie->gcm_aad_dma_mem.dma_tag, + qat_cookie->gcm_aad_dma_mem.dma_map, + BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD); + } + + bus_dmamap_sync(qat_cookie->src_dma_mem.dma_tag, + qat_cookie->src_dma_mem.dma_map, + BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD); + if (CRYPTO_HAS_OUTPUT_BUFFER(crp)) { + bus_dmamap_sync(qat_cookie->dst_dma_mem.dma_tag, + qat_cookie->dst_dma_mem.dma_map, + BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD); + } + bus_dmamap_sync(qat_cookie->dma_tag, + qat_cookie->dma_map, + BUS_DMASYNC_PREWRITE | BUS_DMASYNC_PREREAD); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qat_ocf_cookie_dma_post_sync(struct cryptop *crp, CpaCySymDpOpData *pOpData) +{ + struct qat_ocf_cookie *qat_cookie; + + if (NULL == pOpData->pCallbackTag) + return CPA_STATUS_FAIL; + + qat_cookie = (struct qat_ocf_cookie *)pOpData->pCallbackTag; + + bus_dmamap_sync(qat_cookie->src_dma_mem.dma_tag, + qat_cookie->src_dma_mem.dma_map, + BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); + + if (CRYPTO_HAS_OUTPUT_BUFFER(crp)) { + bus_dmamap_sync(qat_cookie->dst_dma_mem.dma_tag, + qat_cookie->dst_dma_mem.dma_map, + BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); + } + bus_dmamap_sync(qat_cookie->dma_tag, + qat_cookie->dma_map, + BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); + + if (qat_cookie->is_sep_aad_used) + 
bus_dmamap_sync(qat_cookie->gcm_aad_dma_mem.dma_tag, + qat_cookie->gcm_aad_dma_mem.dma_map, + BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qat_ocf_cookie_dma_unload(struct cryptop *crp, CpaCySymDpOpData *pOpData) +{ + struct qat_ocf_cookie *qat_cookie; + + qat_cookie = pOpData->pCallbackTag; + + if (NULL == qat_cookie) + return CPA_STATUS_FAIL; + + bus_dmamap_unload(qat_cookie->src_dma_mem.dma_tag, + qat_cookie->src_dma_mem.dma_map); + if (CRYPTO_HAS_OUTPUT_BUFFER(crp)) + bus_dmamap_unload(qat_cookie->dst_dma_mem.dma_tag, + qat_cookie->dst_dma_mem.dma_map); + if (qat_cookie->is_sep_aad_used) + bus_dmamap_unload(qat_cookie->gcm_aad_dma_mem.dma_tag, + qat_cookie->gcm_aad_dma_mem.dma_map); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +qat_ocf_cookie_pool_init(struct qat_ocf_instance *instance, device_t dev) +{ + int i, error = 0; + + mtx_init(&instance->cookie_pool_mtx, + "QAT cookie pool MTX", + NULL, + MTX_DEF); + instance->free_cookie_ptr = 0; + for (i = 0; i < QAT_OCF_MEM_POOL_SIZE; i++) { + struct qat_ocf_cookie *qat_cookie; + struct qat_ocf_dma_mem *entry_dma_mem; + + entry_dma_mem = &instance->cookie_dmamem[i]; + + /* Allocate DMA segment for cache entry. + * Cache has to be stored in DMAable mem due to + * it contains i.a src and dst flat buffer + * lists. + */ + error = qat_ocf_alloc_dma_mem(dev, + entry_dma_mem, + 1, + sizeof(struct qat_ocf_cookie), + (1 << 6)); + if (error) + break; + + qat_cookie = entry_dma_mem->dma_vaddr; + instance->cookie_pool[i] = qat_cookie; + + qat_cookie->dma_map = entry_dma_mem->dma_map; + qat_cookie->dma_tag = entry_dma_mem->dma_tag; + + qat_ocf_clear_cookie(qat_cookie); + + /* Physical address of IV buffer */ + qat_cookie->qat_ocf_iv_buf_paddr = + entry_dma_mem->dma_seg.ds_addr + + offsetof(struct qat_ocf_cookie, qat_ocf_iv_buf); + + /* Physical address of digest buffer */ + qat_cookie->qat_ocf_digest_paddr = + entry_dma_mem->dma_seg.ds_addr + + offsetof(struct qat_ocf_cookie, qat_ocf_digest); + + /* Physical address of AAD buffer */ + qat_cookie->qat_ocf_gcm_aad_paddr = + entry_dma_mem->dma_seg.ds_addr + + offsetof(struct qat_ocf_cookie, qat_ocf_gcm_aad); + + /* We already got physical address of src and dest SGL header */ + qat_cookie->src_buffer_list_paddr = + entry_dma_mem->dma_seg.ds_addr + + offsetof(struct qat_ocf_cookie, src_buffers); + + qat_cookie->dst_buffer_list_paddr = + entry_dma_mem->dma_seg.ds_addr + + offsetof(struct qat_ocf_cookie, dst_buffers); + + /* We already have physical address of pOpdata */ + qat_cookie->pOpData_paddr = entry_dma_mem->dma_seg.ds_addr + + offsetof(struct qat_ocf_cookie, pOpdata); + /* Init QAT DP API OP data with const values */ + qat_cookie->pOpdata.pCallbackTag = (void *)qat_cookie; + qat_cookie->pOpdata.thisPhys = + (CpaPhysicalAddr)qat_cookie->pOpData_paddr; + + error = qat_ocf_dma_tag_and_map(dev, + &qat_cookie->src_dma_mem, + QAT_OCF_MAXLEN, + QAT_OCF_MAX_FLATS); + if (error) + break; + + error = qat_ocf_dma_tag_and_map(dev, + &qat_cookie->dst_dma_mem, + QAT_OCF_MAXLEN, + QAT_OCF_MAX_FLATS); + if (error) + break; + + /* Max one flat buffer for embedded AAD if provided as separated + * by OCF and it's not supported by QAT */ + error = qat_ocf_dma_tag_and_map(dev, + &qat_cookie->gcm_aad_dma_mem, + QAT_OCF_MAXLEN, + 1); + if (error) + break; + + instance->free_cookie[i] = qat_cookie; + instance->free_cookie_ptr++; + } + + return error; +} + +CpaStatus +qat_ocf_cookie_alloc(struct qat_ocf_instance *qat_instance, + struct qat_ocf_cookie **cookie_out) +{ + 
mtx_lock(&qat_instance->cookie_pool_mtx); + if (qat_instance->free_cookie_ptr == 0) { + mtx_unlock(&qat_instance->cookie_pool_mtx); + return CPA_STATUS_FAIL; + } + *cookie_out = + qat_instance->free_cookie[--qat_instance->free_cookie_ptr]; + mtx_unlock(&qat_instance->cookie_pool_mtx); + + return CPA_STATUS_SUCCESS; +} + +void +qat_ocf_cookie_free(struct qat_ocf_instance *qat_instance, + struct qat_ocf_cookie *cookie) +{ + qat_ocf_clear_cookie(cookie); + mtx_lock(&qat_instance->cookie_pool_mtx); + qat_instance->free_cookie[qat_instance->free_cookie_ptr++] = cookie; + mtx_unlock(&qat_instance->cookie_pool_mtx); +} + +void +qat_ocf_cookie_pool_deinit(struct qat_ocf_instance *qat_instance) +{ + int i; + + for (i = 0; i < QAT_OCF_MEM_POOL_SIZE; i++) { + struct qat_ocf_cookie *cookie; + struct qat_ocf_dma_mem *cookie_dma; + + cookie = qat_instance->cookie_pool[i]; + if (NULL == cookie) + continue; + + /* Destroy tag and map for source SGL */ + if (cookie->src_dma_mem.dma_tag) { + bus_dmamap_destroy(cookie->src_dma_mem.dma_tag, + cookie->src_dma_mem.dma_map); + bus_dma_tag_destroy(cookie->src_dma_mem.dma_tag); + } + + /* Destroy tag and map for dest SGL */ + if (cookie->dst_dma_mem.dma_tag) { + bus_dmamap_destroy(cookie->dst_dma_mem.dma_tag, + cookie->dst_dma_mem.dma_map); + bus_dma_tag_destroy(cookie->dst_dma_mem.dma_tag); + } + + /* Destroy tag and map for separated AAD */ + if (cookie->gcm_aad_dma_mem.dma_tag) { + bus_dmamap_destroy(cookie->gcm_aad_dma_mem.dma_tag, + cookie->gcm_aad_dma_mem.dma_map); + bus_dma_tag_destroy(cookie->gcm_aad_dma_mem.dma_tag); + } + + /* Free DMA memory */ + cookie_dma = &qat_instance->cookie_dmamem[i]; + qat_ocf_free_dma_mem(cookie_dma); + qat_instance->cookie_pool[i] = NULL; + } + mtx_destroy(&qat_instance->cookie_pool_mtx); + + return; +} diff --git a/sys/dev/qat/qat/qat_ocf_utils.c b/sys/dev/qat/qat/qat_ocf_utils.c new file mode 100644 index 000000000000..64a10128b985 --- /dev/null +++ b/sys/dev/qat/qat/qat_ocf_utils.c @@ -0,0 +1,172 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/* System headers */ +#include <sys/param.h> +#include <sys/systm.h> +#include <sys/bus.h> +#include <sys/kernel.h> +#include <sys/mutex.h> +#include <sys/timespec.h> + +/* QAT specific headers */ +#include "qat_ocf_utils.h" +#include "cpa.h" +#include "lac_common.h" +#include "lac_log.h" +#include "lac_mem.h" +#include "lac_mem_pools.h" +#include "lac_list.h" +#include "lac_sym.h" +#include "lac_sym_qat.h" +#include "lac_sal.h" +#include "lac_sal_ctrl.h" +#include "lac_session.h" +#include "lac_sym_cipher.h" +#include "lac_sym_hash.h" +#include "lac_sym_alg_chain.h" +#include "lac_sym_stats.h" +#include "lac_sym_partial.h" +#include "lac_sym_qat_hash_defs_lookup.h" + +#define QAT_OCF_AAD_NOCHANGE (-1) + +CpaStatus +qat_ocf_wait_for_session(CpaCySymSessionCtx sessionCtx, Cpa32U timeoutMS) +{ + CpaBoolean sessionInUse = CPA_TRUE; + CpaStatus status; + struct timespec start_ts; + struct timespec current_ts; + struct timespec delta; + u64 delta_ms; + + nanotime(&start_ts); + for (;;) { + status = cpaCySymSessionInUse(sessionCtx, &sessionInUse); + if (CPA_STATUS_SUCCESS != status) + return CPA_STATUS_FAIL; + if (CPA_FALSE == sessionInUse) + break; + nanotime(¤t_ts); + delta = timespec_sub(current_ts, start_ts); + delta_ms = (delta.tv_sec * 1000) + + (delta.tv_nsec / NSEC_PER_MSEC); + if (delta_ms > (timeoutMS)) + return CPA_STATUS_RESOURCE; + qatUtilsYield(); + } + + return CPA_STATUS_SUCCESS; +} + +static CpaStatus 
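+/*
+ * Refresh the cipher key, authentication key and/or AAD length of an
+ * existing QAT session via the LacAlgChain session-update helpers.  A
+ * missing session context is treated as success; a session still reported
+ * busy by cpaCySymSessionInUse() returns CPA_STATUS_RESOURCE so the
+ * caller can retry later.
+ */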
+qat_ocf_session_update(struct qat_ocf_session *ocf_session, + Cpa8U *newCipher, + Cpa8U *newAuth, + Cpa32U newAADLength) +{ + lac_session_desc_t *pSessionDesc = NULL; + CpaStatus status = CPA_STATUS_SUCCESS; + CpaBoolean sessionInUse = CPA_TRUE; + + if (!ocf_session->sessionCtx) + return CPA_STATUS_SUCCESS; + + status = cpaCySymSessionInUse(ocf_session->sessionCtx, &sessionInUse); + if (CPA_TRUE == sessionInUse) + return CPA_STATUS_RESOURCE; + + pSessionDesc = + LAC_SYM_SESSION_DESC_FROM_CTX_GET(ocf_session->sessionCtx); + + if (newAADLength != QAT_OCF_AAD_NOCHANGE) { + ocf_session->aadLen = newAADLength; + status = + LacAlgChain_SessionAADUpdate(pSessionDesc, newAADLength); + if (CPA_STATUS_SUCCESS != status) + return status; + } + + if (newCipher) { + status = + LacAlgChain_SessionCipherKeyUpdate(pSessionDesc, newCipher); + if (CPA_STATUS_SUCCESS != status) + return status; + } + + if (newAuth) { + status = + LacAlgChain_SessionAuthKeyUpdate(pSessionDesc, newAuth); + if (CPA_STATUS_SUCCESS != status) + return status; + } + + return status; +} + +CpaStatus +qat_ocf_handle_session_update(struct qat_ocf_dsession *ocf_dsession, + struct cryptop *crp) +{ + Cpa32U newAADLength = QAT_OCF_AAD_NOCHANGE; + Cpa8U *cipherKey; + Cpa8U *authKey; + crypto_session_t cses; + const struct crypto_session_params *csp; + CpaStatus status = CPA_STATUS_SUCCESS; + + if (!ocf_dsession) + return CPA_STATUS_FAIL; + + cses = crp->crp_session; + if (!cses) + return CPA_STATUS_FAIL; + csp = crypto_get_params(cses); + if (!csp) + return CPA_STATUS_FAIL; + + cipherKey = crp->crp_cipher_key; + authKey = crp->crp_auth_key; + + if (is_sep_aad_supported(csp)) { + /* Determine if AAD has change */ + if ((ocf_dsession->encSession.sessionCtx && + ocf_dsession->encSession.aadLen != crp->crp_aad_length) || + (ocf_dsession->decSession.sessionCtx && + ocf_dsession->decSession.aadLen != crp->crp_aad_length)) { + newAADLength = crp->crp_aad_length; + + /* Get auth and cipher keys from session if not present + * in the request. Update keys is required to update + * AAD. + */ + if (!authKey) + authKey = csp->csp_auth_key; + if (!cipherKey) + cipherKey = csp->csp_cipher_key; + } + if (!authKey) + authKey = cipherKey; + } + + if (crp->crp_cipher_key || crp->crp_auth_key || + newAADLength != QAT_OCF_AAD_NOCHANGE) { + /* Update encryption session */ + status = qat_ocf_session_update(&ocf_dsession->encSession, + cipherKey, + authKey, + newAADLength); + if (CPA_STATUS_SUCCESS != status) + return status; + /* Update decryption session */ + status = qat_ocf_session_update(&ocf_dsession->decSession, + cipherKey, + authKey, + newAADLength); + if (CPA_STATUS_SUCCESS != status) + return status; + } + + return status; +} diff --git a/sys/dev/qat/qat_api/common/compression/dc_buffers.c b/sys/dev/qat/qat_api/common/compression/dc_buffers.c new file mode 100644 index 000000000000..1a5d9bc8973e --- /dev/null +++ b/sys/dev/qat/qat_api/common/compression/dc_buffers.c @@ -0,0 +1,116 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_buffers.c + * + * @defgroup Dc_DataCompression DC Data Compression + * + * @ingroup Dc_DataCompression + * + * @description + * Implementation of the buffer management operations for + * Data Compression service. 
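+ *
+ *      A caller would typically size the destination buffer with
+ *      cpaDcDeflateCompressBound() (defined below) before submitting a
+ *      compression request.  Illustrative sketch only; dcInstance and
+ *      inputSize stand for the caller's instance handle and input length:
+ *
+ *          Cpa32U outputSize = 0;
+ *          cpaDcDeflateCompressBound(dcInstance, CPA_DC_HT_STATIC,
+ *                                    inputSize, &outputSize);
+ *          (then allocate at least outputSize bytes for the output data)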
+ * + *****************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ +#include "cpa.h" +#include "cpa_dc.h" +#include "cpa_dc_bp.h" + +#include "sal_types_compression.h" +#include "icp_qat_fw_comp.h" + +#define CPA_DC_CEIL_DIV(x, y) (((x) + (y)-1) / (y)) +#define DC_DEST_BUFF_EXTRA_DEFLATE_GEN2 (55) + +CpaStatus +cpaDcBufferListGetMetaSize(const CpaInstanceHandle instanceHandle, + Cpa32U numBuffers, + Cpa32U *pSizeInBytes) +{ + CpaInstanceHandle insHandle = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == instanceHandle) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = instanceHandle; + } + + LAC_CHECK_INSTANCE_HANDLE(insHandle); + LAC_CHECK_NULL_PARAM(pSizeInBytes); + + /* Ensure this is a compression instance */ + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + + if (0 == numBuffers) { + QAT_UTILS_LOG("Number of buffers is 0.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + *pSizeInBytes = (sizeof(icp_buffer_list_desc_t) + + (sizeof(icp_flat_buffer_desc_t) * (numBuffers + 1)) + + ICP_DESCRIPTOR_ALIGNMENT_BYTES); + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcBnpBufferListGetMetaSize(const CpaInstanceHandle instanceHandle, + Cpa32U numJobs, + Cpa32U *pSizeInBytes) +{ + return CPA_STATUS_UNSUPPORTED; +} + +static inline CpaStatus +dcDeflateBoundGen2(CpaDcHuffType huffType, Cpa32U inputSize, Cpa32U *outputSize) +{ + /* Formula for GEN2 deflate: + * ceil(9 * Total input bytes / 8) + 55 bytes. + * 55 bytes is the skid pad value for GEN2 devices. + */ + *outputSize = + CPA_DC_CEIL_DIV(9 * inputSize, 8) + DC_DEST_BUFF_EXTRA_DEFLATE_GEN2; + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcDeflateCompressBound(const CpaInstanceHandle dcInstance, + CpaDcHuffType huffType, + Cpa32U inputSize, + Cpa32U *outputSize) +{ + CpaInstanceHandle insHandle = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + + LAC_CHECK_INSTANCE_HANDLE(insHandle); + LAC_CHECK_NULL_PARAM(outputSize); + /* Ensure this is a compression instance */ + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + if (!inputSize) { + QAT_UTILS_LOG( + "The input size needs to be greater than zero.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if ((CPA_DC_HT_STATIC != huffType) && + (CPA_DC_HT_FULL_DYNAMIC != huffType)) { + QAT_UTILS_LOG("Invalid huffType value.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + return dcDeflateBoundGen2(huffType, inputSize, outputSize); +} diff --git a/sys/dev/qat/qat_api/common/compression/dc_datapath.c b/sys/dev/qat/qat_api/common/compression/dc_datapath.c new file mode 100644 index 000000000000..0e2aa9f389e2 --- /dev/null +++ b/sys/dev/qat/qat_api/common/compression/dc_datapath.c @@ -0,0 +1,1790 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_datapath.c + * + * @defgroup Dc_DataCompression DC Data Compression + * + * @ingroup Dc_DataCompression + * + * @description + * Implementation of the Data Compression datapath operations. 
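+ *
+ *      A request is built from the template cached in the session
+ *      descriptor (dcCreateRequest), submitted on the instance's
+ *      compression transport handle (dcSendRequest) and completed in
+ *      dcCompression_ProcessCallback(), which converts the firmware
+ *      status into CpaDcRqResults and updates the service statistics.
+ *
+ *      A stateless request enters this path through the public API,
+ *      roughly as in the sketch below (session setup and buffer list
+ *      allocation are assumed to have been done by the caller):
+ *
+ *          CpaDcRqResults results = { 0 };
+ *          status = cpaDcCompressData(dcInstance, sessionHandle,
+ *                                     pSrcBuff, pDestBuff, &results,
+ *                                     CPA_DC_FLUSH_FINAL, callbackTag);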
+ * + *****************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ +#include "cpa.h" +#include "cpa_dc.h" +#include "cpa_dc_dp.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ +#include "dc_session.h" +#include "dc_datapath.h" +#include "sal_statistics.h" +#include "lac_common.h" +#include "lac_mem.h" +#include "lac_mem_pools.h" +#include "sal_types_compression.h" +#include "dc_stats.h" +#include "lac_buffer_desc.h" +#include "lac_sal.h" +#include "lac_log.h" +#include "lac_sync.h" +#include "sal_service_state.h" +#include "sal_qat_cmn_msg.h" +#include "dc_error_counter.h" +#define DC_COMP_MAX_BUFF_SIZE (1024 * 64) + +static QatUtilsAtomic dcErrorCount[MAX_DC_ERROR_TYPE]; + +void +dcErrorLog(CpaDcReqStatus dcError) +{ + Cpa32U absError = 0; + + absError = abs(dcError); + if ((dcError < CPA_DC_OK) && (absError < MAX_DC_ERROR_TYPE)) { + qatUtilsAtomicInc(&(dcErrorCount[absError])); + } +} + +Cpa64U +getDcErrorCounter(CpaDcReqStatus dcError) +{ + Cpa32U absError = 0; + + absError = abs(dcError); + if (!(dcError >= CPA_DC_OK || dcError < CPA_DC_EMPTY_DYM_BLK)) { + return (Cpa64U)qatUtilsAtomicGet(&dcErrorCount[absError]); + } + + return 0; +} + +void +dcCompression_ProcessCallback(void *pRespMsg) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + icp_qat_fw_comp_resp_t *pCompRespMsg = NULL; + void *callbackTag = NULL; + Cpa64U *pReqData = NULL; + CpaDcDpOpData *pResponse = NULL; + CpaDcRqResults *pResults = NULL; + CpaDcCallbackFn pCbFunc = NULL; + dc_session_desc_t *pSessionDesc = NULL; + sal_compression_service_t *pService = NULL; + dc_compression_cookie_t *pCookie = NULL; + CpaDcOpData *pOpData = NULL; + CpaBoolean cmpPass = CPA_TRUE, xlatPass = CPA_TRUE; + CpaBoolean verifyHwIntegrityCrcs = CPA_FALSE; + Cpa8U cmpErr = ERR_CODE_NO_ERROR, xlatErr = ERR_CODE_NO_ERROR; + dc_request_dir_t compDecomp = DC_COMPRESSION_REQUEST; + Cpa8U opStatus = ICP_QAT_FW_COMN_STATUS_FLAG_OK; + Cpa8U hdrFlags = 0; + + /* Cast response message to compression response message type */ + pCompRespMsg = (icp_qat_fw_comp_resp_t *)pRespMsg; + + /* Extract request data pointer from the opaque data */ + LAC_MEM_SHARED_READ_TO_PTR(pCompRespMsg->opaque_data, pReqData); + + /* Extract fields from the request data structure */ + pCookie = (dc_compression_cookie_t *)pReqData; + if (!pCookie) + return; + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pCookie->pSessionHandle); + + if (CPA_TRUE == pSessionDesc->isDcDp) { + pResponse = (CpaDcDpOpData *)pReqData; + pResults = &(pResponse->results); + + if (CPA_DC_DIR_DECOMPRESS == pSessionDesc->sessDirection) { + compDecomp = DC_DECOMPRESSION_REQUEST; + } + } else { + pSessionDesc = pCookie->pSessionDesc; + pResults = pCookie->pResults; + callbackTag = pCookie->callbackTag; + pCbFunc = pCookie->pSessionDesc->pCompressionCb; + compDecomp = pCookie->compDecomp; + pOpData = pCookie->pDcOpData; + } + + pService = (sal_compression_service_t *)(pCookie->dcInstance); + + opStatus = pCompRespMsg->comn_resp.comn_status; + + if (NULL != pOpData) { + verifyHwIntegrityCrcs = pOpData->verifyHwIntegrityCrcs; + } + + hdrFlags = pCompRespMsg->comn_resp.hdr_flags; + + /* Get the cmp error code */ + cmpErr = 
pCompRespMsg->comn_resp.comn_error.s1.cmp_err_code; + if (ICP_QAT_FW_COMN_RESP_UNSUPPORTED_REQUEST_STAT_GET(opStatus)) { + /* Compression not supported by firmware, set produced/consumed + to zero + and call the cb function with status CPA_STATUS_UNSUPPORTED + */ + QAT_UTILS_LOG("Compression feature not supported\n"); + status = CPA_STATUS_UNSUPPORTED; + pResults->status = (Cpa8S)cmpErr; + pResults->consumed = 0; + pResults->produced = 0; + if (CPA_TRUE == pSessionDesc->isDcDp) { + if (pResponse) + pResponse->responseStatus = + CPA_STATUS_UNSUPPORTED; + (pService->pDcDpCb)(pResponse); + } else { + /* Free the memory pool */ + Lac_MemPoolEntryFree(pCookie); + pCookie = NULL; + if (NULL != pCbFunc) { + pCbFunc(callbackTag, status); + } + } + if (DC_COMPRESSION_REQUEST == compDecomp) { + COMPRESSION_STAT_INC(numCompCompletedErrors, pService); + } else { + COMPRESSION_STAT_INC(numDecompCompletedErrors, + pService); + } + return; + } else { + /* Check compression response status */ + cmpPass = + (CpaBoolean)(ICP_QAT_FW_COMN_STATUS_FLAG_OK == + ICP_QAT_FW_COMN_RESP_CMP_STAT_GET(opStatus)); + } + + if (CPA_DC_INCOMPLETE_FILE_ERR == (Cpa8S)cmpErr) { + cmpPass = CPA_TRUE; + cmpErr = ERR_CODE_NO_ERROR; + } + /* log the slice hang and endpoint push/pull error inside the response + */ + if (ERR_CODE_SSM_ERROR == (Cpa8S)cmpErr) { + QAT_UTILS_LOG( + "Slice hang detected on the compression slice.\n"); + } else if (ERR_CODE_ENDPOINT_ERROR == (Cpa8S)cmpErr) { + QAT_UTILS_LOG( + "PCIe End Point Push/Pull or TI/RI Parity error detected.\n"); + } + + /* We return the compression error code for now. We would need to update + * the API if we decide to return both error codes */ + pResults->status = (Cpa8S)cmpErr; + + /* Check the translator status */ + if ((DC_COMPRESSION_REQUEST == compDecomp) && + (CPA_DC_HT_FULL_DYNAMIC == pSessionDesc->huffType)) { + /* Check translator response status */ + xlatPass = + (CpaBoolean)(ICP_QAT_FW_COMN_STATUS_FLAG_OK == + ICP_QAT_FW_COMN_RESP_XLAT_STAT_GET(opStatus)); + + /* Get the translator error code */ + xlatErr = pCompRespMsg->comn_resp.comn_error.s1.xlat_err_code; + + /* Return a fatal error or a potential error in the translator + * slice + * if the compression slice did not return any error */ + if ((CPA_DC_OK == pResults->status) || + (CPA_DC_FATALERR == (Cpa8S)xlatErr)) { + pResults->status = (Cpa8S)xlatErr; + } + } + /* Update dc error counter */ + dcErrorLog(pResults->status); + + if (CPA_FALSE == pSessionDesc->isDcDp) { + /* In case of any error for an end of packet request, we need to + * update + * the request type for the following request */ + if (CPA_DC_FLUSH_FINAL == pCookie->flushFlag && cmpPass && + xlatPass) { + pSessionDesc->requestType = DC_REQUEST_FIRST; + } else { + pSessionDesc->requestType = DC_REQUEST_SUBSEQUENT; + } + if ((CPA_DC_STATEFUL == pSessionDesc->sessState) || + ((CPA_DC_STATELESS == pSessionDesc->sessState) && + (DC_COMPRESSION_REQUEST == compDecomp))) { + /* Overflow is a valid use case for Traditional API + * only. + * Stateful Overflow is supported in both compression + * and + * decompression direction. + * Stateless Overflow is supported only in compression + * direction. 
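+			 * For the Data Plane API, by contrast, an overflow is
+			 * treated as a failed request (see the else branch
+			 * below).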
+ */ + if (CPA_DC_OVERFLOW == (Cpa8S)cmpErr) + cmpPass = CPA_TRUE; + + if (CPA_DC_OVERFLOW == (Cpa8S)xlatErr) { + xlatPass = CPA_TRUE; + } + } + } else { + if (CPA_DC_OVERFLOW == (Cpa8S)cmpErr) { + cmpPass = CPA_FALSE; + } + if (CPA_DC_OVERFLOW == (Cpa8S)xlatErr) { + xlatPass = CPA_FALSE; + } + } + + if ((CPA_TRUE == cmpPass) && (CPA_TRUE == xlatPass)) { + /* Extract the response from the firmware */ + pResults->consumed = + pCompRespMsg->comp_resp_pars.input_byte_counter; + pResults->produced = + pCompRespMsg->comp_resp_pars.output_byte_counter; + pSessionDesc->cumulativeConsumedBytes += pResults->consumed; + + if (CPA_DC_OVERFLOW != (Cpa8S)xlatErr) { + if (CPA_DC_CRC32 == pSessionDesc->checksumType) { + pResults->checksum = + pCompRespMsg->comp_resp_pars.crc.legacy + .curr_crc32; + } else if (CPA_DC_ADLER32 == + pSessionDesc->checksumType) { + pResults->checksum = + pCompRespMsg->comp_resp_pars.crc.legacy + .curr_adler_32; + } + pSessionDesc->previousChecksum = pResults->checksum; + } + + if (DC_DECOMPRESSION_REQUEST == compDecomp) { + pResults->endOfLastBlock = + (ICP_QAT_FW_COMN_STATUS_CMP_END_OF_LAST_BLK_FLAG_SET == + ICP_QAT_FW_COMN_RESP_CMP_END_OF_LAST_BLK_FLAG_GET( + opStatus)); + } + + /* Save the checksum for the next request */ + if ((CPA_DC_OVERFLOW != (Cpa8S)xlatErr) && + (CPA_TRUE == verifyHwIntegrityCrcs)) { + pSessionDesc->previousChecksum = + pSessionDesc->seedSwCrc.swCrcI; + } + + /* Check if a CNV recovery happened and + * increase stats counter + */ + if ((DC_COMPRESSION_REQUEST == compDecomp) && + ICP_QAT_FW_COMN_HDR_CNV_FLAG_GET(hdrFlags) && + ICP_QAT_FW_COMN_HDR_CNVNR_FLAG_GET(hdrFlags)) { + COMPRESSION_STAT_INC(numCompCnvErrorsRecovered, + pService); + } + + if (CPA_TRUE == pSessionDesc->isDcDp) { + if (pResponse) + pResponse->responseStatus = CPA_STATUS_SUCCESS; + } else { + if (DC_COMPRESSION_REQUEST == compDecomp) { + COMPRESSION_STAT_INC(numCompCompleted, + pService); + } else { + COMPRESSION_STAT_INC(numDecompCompleted, + pService); + } + } + } else { + pResults->consumed = 0; + pResults->produced = 0; + if (CPA_DC_OVERFLOW == pResults->status && + CPA_DC_STATELESS == pSessionDesc->sessState) { + /* This error message will be returned by Data Plane API + * in both + * compression and decompression direction. With + * Traditional API + * this error message will be returned only in stateless + * decompression direction */ + QAT_UTILS_LOG( + "Unrecoverable error: stateless overflow. 
You may need to increase the size of your destination buffer.\n"); + } + + if (CPA_TRUE == pSessionDesc->isDcDp) { + if (pResponse) + pResponse->responseStatus = CPA_STATUS_FAIL; + } else { + if (CPA_DC_OK != pResults->status && + CPA_DC_INCOMPLETE_FILE_ERR != pResults->status) { + status = CPA_STATUS_FAIL; + } + + if (DC_COMPRESSION_REQUEST == compDecomp) { + COMPRESSION_STAT_INC(numCompCompletedErrors, + pService); + } else { + COMPRESSION_STAT_INC(numDecompCompletedErrors, + pService); + } + } + } + + if (CPA_TRUE == pSessionDesc->isDcDp) { + /* Decrement number of stateless pending callbacks for session + */ + pSessionDesc->pendingDpStatelessCbCount--; + (pService->pDcDpCb)(pResponse); + } else { + /* Decrement number of pending callbacks for session */ + if (CPA_DC_STATELESS == pSessionDesc->sessState) { + qatUtilsAtomicDec( + &(pCookie->pSessionDesc->pendingStatelessCbCount)); + } else if (0 != + qatUtilsAtomicGet(&pCookie->pSessionDesc + ->pendingStatefulCbCount)) { + qatUtilsAtomicDec( + &(pCookie->pSessionDesc->pendingStatefulCbCount)); + } + + /* Free the memory pool */ + if (NULL != pCookie) { + Lac_MemPoolEntryFree(pCookie); + pCookie = NULL; + } + + if (NULL != pCbFunc) { + pCbFunc(callbackTag, status); + } + } +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Check that all the parameters in the pOpData structure are valid + * + * @description + * Check that all the parameters in the pOpData structure are valid + * + * @param[in] pService Pointer to the compression service + * @param[in] pOpData Pointer to request information structure + * holding parameters for cpaDcCompress2 and + * CpaDcDecompressData2 + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + * + *****************************************************************************/ +static CpaStatus +dcCheckOpData(sal_compression_service_t *pService, CpaDcOpData *pOpData) +{ + CpaDcSkipMode skipMode = 0; + + if ((pOpData->flushFlag < CPA_DC_FLUSH_NONE) || + (pOpData->flushFlag > CPA_DC_FLUSH_FULL)) { + LAC_INVALID_PARAM_LOG("Invalid flushFlag value"); + return CPA_STATUS_INVALID_PARAM; + } + + skipMode = pOpData->inputSkipData.skipMode; + if ((skipMode < CPA_DC_SKIP_DISABLED) || + (skipMode > CPA_DC_SKIP_STRIDE)) { + LAC_INVALID_PARAM_LOG("Invalid input skip mode value"); + return CPA_STATUS_INVALID_PARAM; + } + + skipMode = pOpData->outputSkipData.skipMode; + if ((skipMode < CPA_DC_SKIP_DISABLED) || + (skipMode > CPA_DC_SKIP_STRIDE)) { + LAC_INVALID_PARAM_LOG("Invalid output skip mode value"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pOpData->integrityCrcCheck == CPA_FALSE && + pOpData->verifyHwIntegrityCrcs == CPA_TRUE) { + LAC_INVALID_PARAM_LOG( + "integrityCrcCheck must be set to true" + "in order to enable verifyHwIntegrityCrcs"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pOpData->integrityCrcCheck != CPA_TRUE && + pOpData->integrityCrcCheck != CPA_FALSE) { + LAC_INVALID_PARAM_LOG("Invalid integrityCrcCheck value"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pOpData->verifyHwIntegrityCrcs != CPA_TRUE && + pOpData->verifyHwIntegrityCrcs != CPA_FALSE) { + LAC_INVALID_PARAM_LOG("Invalid verifyHwIntegrityCrcs value"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pOpData->compressAndVerify != CPA_TRUE && + pOpData->compressAndVerify != CPA_FALSE) { + LAC_INVALID_PARAM_LOG("Invalid cnv decompress check value"); + return CPA_STATUS_INVALID_PARAM; + } + + 
if (CPA_TRUE == pOpData->integrityCrcCheck && + CPA_FALSE == pService->generic_service_info.integrityCrcCheck) { + LAC_INVALID_PARAM_LOG("Integrity CRC check is not " + "supported on this device"); + return CPA_STATUS_INVALID_PARAM; + } + return CPA_STATUS_SUCCESS; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Check the compression source buffer for Batch and Pack API. + * + * @description + * Check that all the parameters used for Pack compression + * request are valid. This function essentially checks the source buffer + * parameters and results structure parameters. + * + * @param[in] pSessionHandle Session handle + * @param[in] pSrcBuff Pointer to data buffer for compression + * @param[in] pDestBuff Pointer to buffer space allocated for + * output data + * @param[in] pResults Pointer to results structure + * @param[in] flushFlag Indicates the type of flush to be + * performed + * @param[in] srcBuffSize Size of the source buffer + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + * + *****************************************************************************/ +static CpaStatus +dcCheckSourceData(CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcRqResults *pResults, + CpaDcFlush flushFlag, + Cpa64U srcBuffSize, + CpaDcSkipData *skipData) +{ + dc_session_desc_t *pSessionDesc = NULL; + + LAC_CHECK_NULL_PARAM(pSessionHandle); + LAC_CHECK_NULL_PARAM(pSrcBuff); + LAC_CHECK_NULL_PARAM(pDestBuff); + LAC_CHECK_NULL_PARAM(pResults); + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + if (NULL == pSessionDesc) { + LAC_INVALID_PARAM_LOG("Session handle not as expected"); + return CPA_STATUS_INVALID_PARAM; + } + + if ((flushFlag < CPA_DC_FLUSH_NONE) || + (flushFlag > CPA_DC_FLUSH_FULL)) { + LAC_INVALID_PARAM_LOG("Invalid flushFlag value"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pSrcBuff == pDestBuff) { + LAC_INVALID_PARAM_LOG("In place operation not supported"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Compressing zero bytes is not supported for stateless sessions + * for non Batch and Pack requests */ + if ((CPA_DC_STATELESS == pSessionDesc->sessState) && + (0 == srcBuffSize) && (NULL == skipData)) { + LAC_INVALID_PARAM_LOG( + "The source buffer size needs to be greater than " + "zero bytes for stateless sessions"); + return CPA_STATUS_INVALID_PARAM; + } + + if (srcBuffSize > DC_BUFFER_MAX_SIZE) { + LAC_INVALID_PARAM_LOG( + "The source buffer size needs to be less than or " + "equal to 2^32-1 bytes"); + return CPA_STATUS_INVALID_PARAM; + } + + return CPA_STATUS_SUCCESS; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Check the compression or decompression function parameters. + * + * @description + * Check that all the parameters used for a Batch and Pack compression + * request are valid. This function essentially checks the destination + * buffer parameters and intermediate buffer parameters. 
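+ *      For dynamic Huffman compression it additionally verifies that
+ *      intermediate buffers were registered at cpaDcStartInstance time and
+ *      enforces the device's minimum destination buffer size.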
+ * + * @param[in] pService Pointer to the compression service + * @param[in] pSessionHandle Session handle + * @param[in] pDestBuff Pointer to buffer space allocated for + * output data + * @param[in] compDecomp Direction of the operation + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + * + *****************************************************************************/ +static CpaStatus +dcCheckDestinationData(sal_compression_service_t *pService, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pDestBuff, + dc_request_dir_t compDecomp) +{ + dc_session_desc_t *pSessionDesc = NULL; + Cpa64U destBuffSize = 0; + + LAC_CHECK_NULL_PARAM(pSessionHandle); + LAC_CHECK_NULL_PARAM(pDestBuff); + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + if (NULL == pSessionDesc) { + LAC_INVALID_PARAM_LOG("Session handle not as expected"); + return CPA_STATUS_INVALID_PARAM; + } + + if (LacBuffDesc_BufferListVerify(pDestBuff, + &destBuffSize, + LAC_NO_ALIGNMENT_SHIFT) != + CPA_STATUS_SUCCESS) { + LAC_INVALID_PARAM_LOG( + "Invalid destination buffer list parameter"); + return CPA_STATUS_INVALID_PARAM; + } + + if (destBuffSize > DC_BUFFER_MAX_SIZE) { + LAC_INVALID_PARAM_LOG( + "The destination buffer size needs to be less " + "than or equal to 2^32-1 bytes"); + return CPA_STATUS_INVALID_PARAM; + } + + if (CPA_TRUE == pSessionDesc->isDcDp) { + LAC_INVALID_PARAM_LOG( + "The session type should not be data plane"); + return CPA_STATUS_INVALID_PARAM; + } + + if (DC_COMPRESSION_REQUEST == compDecomp) { + if (CPA_DC_HT_FULL_DYNAMIC == pSessionDesc->huffType) { + + /* Check if intermediate buffers are supported */ + if ((0 == pService->pInterBuffPtrsArrayPhyAddr) || + (NULL == pService->pInterBuffPtrsArray)) { + LAC_LOG_ERROR( + "No intermediate buffer defined for this instance " + "- see cpaDcStartInstance"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Ensure that the destination buffer size is greater or + * equal to 128B */ + if (destBuffSize < DC_DEST_BUFFER_DYN_MIN_SIZE) { + LAC_INVALID_PARAM_LOG( + "Destination buffer size should be " + "greater or equal to 128B"); + return CPA_STATUS_INVALID_PARAM; + } + } else + { + /* Ensure that the destination buffer size is greater or + * equal to devices min output buff size */ + if (destBuffSize < + pService->comp_device_data.minOutputBuffSize) { + LAC_INVALID_PARAM_LOG1( + "Destination buffer size should be " + "greater or equal to %d bytes", + pService->comp_device_data + .minOutputBuffSize); + return CPA_STATUS_INVALID_PARAM; + } + } + } else { + /* Ensure that the destination buffer size is greater than + * 0 bytes */ + if (destBuffSize < DC_DEST_BUFFER_DEC_MIN_SIZE) { + LAC_INVALID_PARAM_LOG( + "Destination buffer size should be " + "greater than 0 bytes"); + return CPA_STATUS_INVALID_PARAM; + } + } + return CPA_STATUS_SUCCESS; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Populate the compression request parameters + * + * @description + * This function will populate the compression request parameters + * + * @param[out] pCompReqParams Pointer to the compression request parameters + * @param[in] pCookie Pointer to the compression cookie + * + *****************************************************************************/ +static void +dcCompRequestParamsPopulate(icp_qat_fw_comp_req_params_t *pCompReqParams, + dc_compression_cookie_t *pCookie) +{ + pCompReqParams->comp_len = 
pCookie->srcTotalDataLenInBytes; + pCompReqParams->out_buffer_sz = pCookie->dstTotalDataLenInBytes; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Create the requests for compression or decompression + * + * @description + * Create the requests for compression or decompression. This function + * will update the cookie will all required information. + * + * @param{out] pCookie Pointer to the compression cookie + * @param[in] pService Pointer to the compression service + * @param[in] pSessionDesc Pointer to the session descriptor + * @param[in pSessionHandle Session handle + * @param[in] pSrcBuff Pointer to data buffer for compression + * @param[in] pDestBuff Pointer to buffer space for data after + * compression + * @param[in] pResults Pointer to results structure + * @param[in] flushFlag Indicates the type of flush to be + * performed + * @param[in] pOpData Pointer to request information structure + * holding parameters for cpaDcCompress2 + * and CpaDcDecompressData2 + * @param[in] callbackTag Pointer to the callback tag + * @param[in] compDecomp Direction of the operation + * @param[in] compressAndVerify Compress and Verify + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + * + *****************************************************************************/ +static CpaStatus +dcCreateRequest(dc_compression_cookie_t *pCookie, + sal_compression_service_t *pService, + dc_session_desc_t *pSessionDesc, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcRqResults *pResults, + CpaDcFlush flushFlag, + CpaDcOpData *pOpData, + void *callbackTag, + dc_request_dir_t compDecomp, + dc_cnv_mode_t cnvMode) +{ + icp_qat_fw_comp_req_t *pMsg = NULL; + icp_qat_fw_comp_req_params_t *pCompReqParams = NULL; + Cpa64U srcAddrPhys = 0, dstAddrPhys = 0; + Cpa64U srcTotalDataLenInBytes = 0, dstTotalDataLenInBytes = 0; + + Cpa32U rpCmdFlags = 0; + Cpa8U sop = ICP_QAT_FW_COMP_SOP; + Cpa8U eop = ICP_QAT_FW_COMP_EOP; + Cpa8U bFinal = ICP_QAT_FW_COMP_NOT_BFINAL; + Cpa8U crcMode = ICP_QAT_FW_COMP_CRC_MODE_LEGACY; + Cpa8U cnvDecompReq = ICP_QAT_FW_COMP_NO_CNV; + Cpa8U cnvRecovery = ICP_QAT_FW_COMP_NO_CNV_RECOVERY; + CpaBoolean integrityCrcCheck = CPA_FALSE; + CpaStatus status = CPA_STATUS_SUCCESS; + CpaDcFlush flush = CPA_DC_FLUSH_NONE; + Cpa32U initial_adler = 1; + Cpa32U initial_crc32 = 0; + icp_qat_fw_comp_req_t *pReqCache = NULL; + + /* Write the buffer descriptors */ + status = LacBuffDesc_BufferListDescWriteAndGetSize( + pSrcBuff, + &srcAddrPhys, + CPA_FALSE, + &srcTotalDataLenInBytes, + &(pService->generic_service_info)); + if (status != CPA_STATUS_SUCCESS) { + return status; + } + + status = LacBuffDesc_BufferListDescWriteAndGetSize( + pDestBuff, + &dstAddrPhys, + CPA_FALSE, + &dstTotalDataLenInBytes, + &(pService->generic_service_info)); + if (status != CPA_STATUS_SUCCESS) { + return status; + } + + /* Populate the compression cookie */ + pCookie->dcInstance = pService; + pCookie->pSessionHandle = pSessionHandle; + pCookie->callbackTag = callbackTag; + pCookie->pSessionDesc = pSessionDesc; + pCookie->pDcOpData = pOpData; + pCookie->pResults = pResults; + pCookie->compDecomp = compDecomp; + pCookie->pUserSrcBuff = NULL; + pCookie->pUserDestBuff = NULL; + + /* Extract flush flag from either the opData or from the + * parameter. 
Opdata have been introduce with APIs + * cpaDcCompressData2 and cpaDcDecompressData2 */ + if (NULL != pOpData) { + flush = pOpData->flushFlag; + integrityCrcCheck = pOpData->integrityCrcCheck; + } else { + flush = flushFlag; + } + pCookie->flushFlag = flush; + + /* The firmware expects the length in bytes for source and destination + * to be Cpa32U parameters. However the total data length could be + * bigger as allocated by the user. We ensure that this is not the case + * in dcCheckSourceData and cast the values to Cpa32U here */ + pCookie->srcTotalDataLenInBytes = (Cpa32U)srcTotalDataLenInBytes; + if ((DC_COMPRESSION_REQUEST == compDecomp) && + (CPA_DC_HT_FULL_DYNAMIC == pSessionDesc->huffType)) { + if (pService->minInterBuffSizeInBytes < + (Cpa32U)dstTotalDataLenInBytes) { + pCookie->dstTotalDataLenInBytes = + (Cpa32U)(pService->minInterBuffSizeInBytes); + } else { + pCookie->dstTotalDataLenInBytes = + (Cpa32U)dstTotalDataLenInBytes; + } + } else + { + pCookie->dstTotalDataLenInBytes = + (Cpa32U)dstTotalDataLenInBytes; + } + + /* Device can not decompress an odd byte decompression request + * if bFinal is not set + */ + if (CPA_TRUE != pService->comp_device_data.oddByteDecompNobFinal) { + if ((CPA_DC_STATEFUL == pSessionDesc->sessState) && + (CPA_DC_FLUSH_FINAL != flushFlag) && + (DC_DECOMPRESSION_REQUEST == compDecomp) && + (pCookie->srcTotalDataLenInBytes & 0x1)) { + pCookie->srcTotalDataLenInBytes--; + } + } + /* Device can not decompress odd byte interim requests */ + if (CPA_TRUE != pService->comp_device_data.oddByteDecompInterim) { + if ((CPA_DC_STATEFUL == pSessionDesc->sessState) && + (CPA_DC_FLUSH_FINAL != flushFlag) && + (CPA_DC_FLUSH_FULL != flushFlag) && + (DC_DECOMPRESSION_REQUEST == compDecomp) && + (pCookie->srcTotalDataLenInBytes & 0x1)) { + pCookie->srcTotalDataLenInBytes--; + } + } + + pMsg = (icp_qat_fw_comp_req_t *)&pCookie->request; + + if (DC_COMPRESSION_REQUEST == compDecomp) { + pReqCache = &(pSessionDesc->reqCacheComp); + } else { + pReqCache = &(pSessionDesc->reqCacheDecomp); + } + + /* Fills the msg from the template cached in the session descriptor */ + memcpy((void *)pMsg, + (void *)(pReqCache), + LAC_QAT_DC_REQ_SZ_LW * LAC_LONG_WORD_IN_BYTES); + + if (DC_REQUEST_FIRST == pSessionDesc->requestType) { + initial_adler = 1; + initial_crc32 = 0; + + if (CPA_DC_ADLER32 == pSessionDesc->checksumType) { + pSessionDesc->previousChecksum = 1; + } else { + pSessionDesc->previousChecksum = 0; + } + } else if (CPA_DC_STATELESS == pSessionDesc->sessState) { + pSessionDesc->previousChecksum = pResults->checksum; + + if (CPA_DC_ADLER32 == pSessionDesc->checksumType) { + initial_adler = pSessionDesc->previousChecksum; + } else { + initial_crc32 = pSessionDesc->previousChecksum; + } + } + + /* Backup source and destination buffer addresses, + * CRC calculations both for CNV and translator overflow + * will be performed on them in the callback function. 
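+	 * The original lists are preserved in pCookie->pUserSrcBuff and
+	 * pCookie->pUserDestBuff for that purpose.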
+ */ + pCookie->pUserSrcBuff = pSrcBuff; + pCookie->pUserDestBuff = pDestBuff; + + /* + * Due to implementation of CNV support and need for backwards + * compatibility certain fields in the request and response structs had + * been changed, moved or placed in unions cnvMode flag signifies fields + * to be selected from req/res + * + * Doing extended crc checks makes sense only when we want to do the + * actual CNV + */ + if (CPA_TRUE == pService->generic_service_info.integrityCrcCheck && + CPA_TRUE == integrityCrcCheck) { + pMsg->comp_pars.crc.crc_data_addr = + pSessionDesc->physDataIntegrityCrcs; + crcMode = ICP_QAT_FW_COMP_CRC_MODE_E2E; + } else { + /* Legacy request structure */ + pMsg->comp_pars.crc.legacy.initial_adler = initial_adler; + pMsg->comp_pars.crc.legacy.initial_crc32 = initial_crc32; + crcMode = ICP_QAT_FW_COMP_CRC_MODE_LEGACY; + } + + /* Populate the cmdFlags */ + if (CPA_DC_STATEFUL == pSessionDesc->sessState) { + pSessionDesc->previousRequestType = pSessionDesc->requestType; + + if (DC_REQUEST_FIRST == pSessionDesc->requestType) { + /* Update the request type for following requests */ + pSessionDesc->requestType = DC_REQUEST_SUBSEQUENT; + + /* Reinitialise the cumulative amount of consumed bytes + */ + pSessionDesc->cumulativeConsumedBytes = 0; + + if (DC_COMPRESSION_REQUEST == compDecomp) { + pSessionDesc->isSopForCompressionProcessed = + CPA_TRUE; + } else if (DC_DECOMPRESSION_REQUEST == compDecomp) { + pSessionDesc->isSopForDecompressionProcessed = + CPA_TRUE; + } + } else { + if (DC_COMPRESSION_REQUEST == compDecomp) { + if (CPA_TRUE == + pSessionDesc + ->isSopForCompressionProcessed) { + sop = ICP_QAT_FW_COMP_NOT_SOP; + } else { + pSessionDesc + ->isSopForCompressionProcessed = + CPA_TRUE; + } + } else if (DC_DECOMPRESSION_REQUEST == compDecomp) { + if (CPA_TRUE == + pSessionDesc + ->isSopForDecompressionProcessed) { + sop = ICP_QAT_FW_COMP_NOT_SOP; + } else { + pSessionDesc + ->isSopForDecompressionProcessed = + CPA_TRUE; + } + } + } + + if ((CPA_DC_FLUSH_FINAL == flush) || + (CPA_DC_FLUSH_FULL == flush)) { + /* Update the request type for following requests */ + pSessionDesc->requestType = DC_REQUEST_FIRST; + } else { + eop = ICP_QAT_FW_COMP_NOT_EOP; + } + } else { + + if (DC_REQUEST_FIRST == pSessionDesc->requestType) { + /* Reinitialise the cumulative amount of consumed bytes + */ + pSessionDesc->cumulativeConsumedBytes = 0; + } + } + + /* (LW 14 - 15) */ + pCompReqParams = &(pMsg->comp_pars); + dcCompRequestParamsPopulate(pCompReqParams, pCookie); + if (CPA_DC_FLUSH_FINAL == flush) { + bFinal = ICP_QAT_FW_COMP_BFINAL; + } + + switch (cnvMode) { + case DC_CNVNR: + cnvRecovery = ICP_QAT_FW_COMP_CNV_RECOVERY; + /* Fall through is intended here, because for CNVNR + * cnvDecompReq also needs to be set */ + case DC_CNV: + cnvDecompReq = ICP_QAT_FW_COMP_CNV; + break; + case DC_NO_CNV: + cnvDecompReq = ICP_QAT_FW_COMP_NO_CNV; + cnvRecovery = ICP_QAT_FW_COMP_NO_CNV_RECOVERY; + break; + } + + /* LW 18 */ + rpCmdFlags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD( + sop, eop, bFinal, cnvDecompReq, cnvRecovery, crcMode); + pMsg->comp_pars.req_par_flags = rpCmdFlags; + + /* Populates the QAT common request middle part of the message + * (LW 6 to 11) */ + SalQatMsg_CmnMidWrite((icp_qat_fw_la_bulk_req_t *)pMsg, + pCookie, + DC_DEFAULT_QAT_PTR_TYPE, + srcAddrPhys, + dstAddrPhys, + 0, + 0); + + return CPA_STATUS_SUCCESS; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Send a compression request to QAT + * + 
* @description + * Send the requests for compression or decompression to QAT + * + * @param{in] pCookie Pointer to the compression cookie + * @param[in] pService Pointer to the compression service + * @param[in] pSessionDesc Pointer to the session descriptor + * @param[in] compDecomp Direction of the operation + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + * + *****************************************************************************/ +static CpaStatus +dcSendRequest(dc_compression_cookie_t *pCookie, + sal_compression_service_t *pService, + dc_session_desc_t *pSessionDesc, + dc_request_dir_t compDecomp) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + /* Send to QAT */ + status = icp_adf_transPutMsg(pService->trans_handle_compression_tx, + (void *)&(pCookie->request), + LAC_QAT_DC_REQ_SZ_LW); + + if ((CPA_DC_STATEFUL == pSessionDesc->sessState) && + (CPA_STATUS_RETRY == status)) { + /* reset requestType after receiving an retry on + * the stateful request */ + pSessionDesc->requestType = pSessionDesc->previousRequestType; + } + + return status; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Process the synchronous and asynchronous case for compression or + * decompression + * + * @description + * Process the synchronous and asynchronous case for compression or + * decompression. This function will then create and send the request to + * the firmware. + * + * @param[in] pService Pointer to the compression service + * @param[in] pSessionDesc Pointer to the session descriptor + * @param[in] dcInstance Instance handle derived from discovery + * functions + * @param[in] pSessionHandle Session handle + * @param[in] numRequests Number of operations in the batch request + * @param[in] pBatchOpData Address of the list of jobs to be processed + * @param[in] pSrcBuff Pointer to data buffer for compression + * @param[in] pDestBuff Pointer to buffer space for data after + * compression + * @param[in] pResults Pointer to results structure + * @param[in] flushFlag Indicates the type of flush to be + * performed + * @param[in] pOpData Pointer to request information structure + * holding parameters for cpaDcCompress2 and + * CpaDcDecompressData2 + * @param[in] callbackTag Pointer to the callback tag + * @param[in] compDecomp Direction of the operation + * @param[in] isAsyncMode Used to know if synchronous or asynchronous + * mode + * @param[in] cnvMode CNV Mode + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_RETRY Retry operation + * @retval CPA_STATUS_FAIL Function failed + * @retval CPA_STATUS_RESOURCE Resource error + * + *****************************************************************************/ +static CpaStatus +dcCompDecompData(sal_compression_service_t *pService, + dc_session_desc_t *pSessionDesc, + CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcRqResults *pResults, + CpaDcFlush flushFlag, + CpaDcOpData *pOpData, + void *callbackTag, + dc_request_dir_t compDecomp, + CpaBoolean isAsyncMode, + dc_cnv_mode_t cnvMode) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + dc_compression_cookie_t *pCookie = NULL; + + if ((LacSync_GenWakeupSyncCaller == pSessionDesc->pCompressionCb) && + isAsyncMode == CPA_TRUE) { + lac_sync_op_data_t *pSyncCallbackData = NULL; + + status = LacSync_CreateSyncCookie(&pSyncCallbackData); + + if 
(CPA_STATUS_SUCCESS == status) { + status = dcCompDecompData(pService, + pSessionDesc, + dcInstance, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + flushFlag, + pOpData, + pSyncCallbackData, + compDecomp, + CPA_FALSE, + cnvMode); + } else { + return status; + } + + if (CPA_STATUS_SUCCESS == status) { + CpaStatus syncStatus = CPA_STATUS_SUCCESS; + + syncStatus = + LacSync_WaitForCallback(pSyncCallbackData, + DC_SYNC_CALLBACK_TIMEOUT, + &status, + NULL); + + /* If callback doesn't come back */ + if (CPA_STATUS_SUCCESS != syncStatus) { + if (DC_COMPRESSION_REQUEST == compDecomp) { + COMPRESSION_STAT_INC( + numCompCompletedErrors, pService); + } else { + COMPRESSION_STAT_INC( + numDecompCompletedErrors, pService); + } + LAC_LOG_ERROR("Callback timed out"); + status = syncStatus; + } + } else { + /* As the Request was not sent the Callback will never + * be called, so need to indicate that we're finished + * with cookie so it can be destroyed. */ + LacSync_SetSyncCookieComplete(pSyncCallbackData); + } + + LacSync_DestroySyncCookie(&pSyncCallbackData); + return status; + } + + /* Allocate the compression cookie + * The memory is freed in callback or in sendRequest if an error occurs + */ + pCookie = (dc_compression_cookie_t *)Lac_MemPoolEntryAlloc( + pService->compression_mem_pool); + if (NULL == pCookie) { + LAC_LOG_ERROR("Cannot get mem pool entry for compression"); + status = CPA_STATUS_RESOURCE; + } else if ((void *)CPA_STATUS_RETRY == pCookie) { + pCookie = NULL; + status = CPA_STATUS_RETRY; + } + + if (CPA_STATUS_SUCCESS == status) { + status = dcCreateRequest(pCookie, + pService, + pSessionDesc, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + flushFlag, + pOpData, + callbackTag, + compDecomp, + cnvMode); + } + + if (CPA_STATUS_SUCCESS == status) { + /* Increment number of pending callbacks for session */ + if (CPA_DC_STATELESS == pSessionDesc->sessState) { + qatUtilsAtomicInc( + &(pSessionDesc->pendingStatelessCbCount)); + } + status = + dcSendRequest(pCookie, pService, pSessionDesc, compDecomp); + } + + if (CPA_STATUS_SUCCESS == status) { + if (DC_COMPRESSION_REQUEST == compDecomp) { + COMPRESSION_STAT_INC(numCompRequests, pService); + } else { + COMPRESSION_STAT_INC(numDecompRequests, pService); + } + } else { + if (DC_COMPRESSION_REQUEST == compDecomp) { + COMPRESSION_STAT_INC(numCompRequestsErrors, pService); + } else { + COMPRESSION_STAT_INC(numDecompRequestsErrors, pService); + } + + /* Decrement number of pending callbacks for session */ + if (CPA_DC_STATELESS == pSessionDesc->sessState) { + qatUtilsAtomicDec( + &(pSessionDesc->pendingStatelessCbCount)); + } else { + qatUtilsAtomicDec( + &(pSessionDesc->pendingStatefulCbCount)); + } + + /* Free the memory pool */ + if (NULL != pCookie) { + if (status != CPA_STATUS_UNSUPPORTED) { + /* Free the memory pool */ + Lac_MemPoolEntryFree(pCookie); + pCookie = NULL; + } + } + } + + return status; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Handle zero length compression or decompression requests + * + * @description + * Handle zero length compression or decompression requests + * + * @param[in] pService Pointer to the compression service + * @param[in] pSessionDesc Pointer to the session descriptor + * @param[in] pResults Pointer to results structure + * @param[in] flushFlag Indicates the type of flush to be + * performed + * @param[in] callbackTag User supplied value to help correlate + * the callback with its associated request + * 
@param[in] compDecomp Direction of the operation + * + * @retval CPA_TRUE Zero length SOP or MOP processed + * @retval CPA_FALSE Zero length EOP + * + *****************************************************************************/ +static CpaStatus +dcZeroLengthRequests(sal_compression_service_t *pService, + dc_session_desc_t *pSessionDesc, + CpaDcRqResults *pResults, + CpaDcFlush flushFlag, + void *callbackTag, + dc_request_dir_t compDecomp) +{ + CpaBoolean status = CPA_FALSE; + CpaDcCallbackFn pCbFunc = pSessionDesc->pCompressionCb; + + if (DC_REQUEST_FIRST == pSessionDesc->requestType) { + /* Reinitialise the cumulative amount of consumed bytes */ + pSessionDesc->cumulativeConsumedBytes = 0; + + /* Zero length SOP */ + if (CPA_DC_ADLER32 == pSessionDesc->checksumType) { + pResults->checksum = 1; + } else { + pResults->checksum = 0; + } + + status = CPA_TRUE; + } else if ((CPA_DC_FLUSH_NONE == flushFlag) || + (CPA_DC_FLUSH_SYNC == flushFlag)) { + /* Zero length MOP */ + pResults->checksum = pSessionDesc->previousChecksum; + status = CPA_TRUE; + } + + if (CPA_TRUE == status) { + pResults->status = CPA_DC_OK; + pResults->produced = 0; + pResults->consumed = 0; + + /* Increment statistics */ + if (DC_COMPRESSION_REQUEST == compDecomp) { + COMPRESSION_STAT_INC(numCompRequests, pService); + COMPRESSION_STAT_INC(numCompCompleted, pService); + } else { + COMPRESSION_STAT_INC(numDecompRequests, pService); + COMPRESSION_STAT_INC(numDecompCompleted, pService); + } + + if (CPA_STATUS_SUCCESS != + LAC_SPINUNLOCK(&(pSessionDesc->sessionLock))) { + LAC_LOG_ERROR("Cannot unlock session lock"); + } + + if ((NULL != pCbFunc) && + (LacSync_GenWakeupSyncCaller != pCbFunc)) { + pCbFunc(callbackTag, CPA_STATUS_SUCCESS); + } + + return CPA_TRUE; + } + + return CPA_FALSE; +} + +static CpaStatus +dcParamCheck(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + sal_compression_service_t *pService, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcRqResults *pResults, + dc_session_desc_t *pSessionDesc, + CpaDcFlush flushFlag, + Cpa64U srcBuffSize) +{ + + if (dcCheckSourceData(pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + flushFlag, + srcBuffSize, + NULL) != CPA_STATUS_SUCCESS) { + return CPA_STATUS_INVALID_PARAM; + } + if (dcCheckDestinationData( + pService, pSessionHandle, pDestBuff, DC_COMPRESSION_REQUEST) != + CPA_STATUS_SUCCESS) { + return CPA_STATUS_INVALID_PARAM; + } + if (CPA_DC_DIR_DECOMPRESS == pSessionDesc->sessDirection) { + LAC_INVALID_PARAM_LOG("Invalid sessDirection value"); + return CPA_STATUS_INVALID_PARAM; + } + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcCompressData(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcRqResults *pResults, + CpaDcFlush flushFlag, + void *callbackTag) +{ + sal_compression_service_t *pService = NULL; + dc_session_desc_t *pSessionDesc = NULL; + CpaInstanceHandle insHandle = NULL; + Cpa64U srcBuffSize = 0; + + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + + pService = (sal_compression_service_t *)insHandle; + + LAC_CHECK_NULL_PARAM(insHandle); + LAC_CHECK_NULL_PARAM(pSessionHandle); + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(insHandle); + + /* This check is outside the parameter checking as it is needed to + * manage zero length requests */ + if (LacBuffDesc_BufferListVerifyNull(pSrcBuff, + &srcBuffSize, + LAC_NO_ALIGNMENT_SHIFT) 
!= + CPA_STATUS_SUCCESS) { + LAC_INVALID_PARAM_LOG("Invalid source buffer list parameter"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Ensure this is a compression instance */ + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + if (CPA_STATUS_SUCCESS != + dcParamCheck(insHandle, + pSessionHandle, + pService, + pSrcBuff, + pDestBuff, + pResults, + pSessionDesc, + flushFlag, + srcBuffSize)) { + return CPA_STATUS_INVALID_PARAM; + } + if (CPA_DC_STATEFUL == pSessionDesc->sessState) { + LAC_INVALID_PARAM_LOG( + "Invalid session state, stateful sessions " + "are not supported"); + return CPA_STATUS_UNSUPPORTED; + } + + if (!(pService->generic_service_info.dcExtendedFeatures & + DC_CNV_EXTENDED_CAPABILITY)) { + LAC_INVALID_PARAM_LOG( + "CompressAndVerify feature not supported"); + return CPA_STATUS_UNSUPPORTED; + } + + if (!(pService->generic_service_info.dcExtendedFeatures & + DC_CNVNR_EXTENDED_CAPABILITY)) { + LAC_INVALID_PARAM_LOG( + "CompressAndVerifyAndRecovery feature not supported"); + return CPA_STATUS_UNSUPPORTED; + } + + return dcCompDecompData(pService, + pSessionDesc, + dcInstance, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + flushFlag, + NULL, + callbackTag, + DC_COMPRESSION_REQUEST, + CPA_TRUE, + DC_CNVNR); +} + +CpaStatus +cpaDcCompressData2(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcOpData *pOpData, + CpaDcRqResults *pResults, + void *callbackTag) +{ + sal_compression_service_t *pService = NULL; + dc_session_desc_t *pSessionDesc = NULL; + CpaInstanceHandle insHandle = NULL; + Cpa64U srcBuffSize = 0; + dc_cnv_mode_t cnvMode = DC_NO_CNV; + + LAC_CHECK_NULL_PARAM(pOpData); + + if (((CPA_TRUE != pOpData->compressAndVerify) && + (CPA_FALSE != pOpData->compressAndVerify)) || + ((CPA_FALSE != pOpData->compressAndVerifyAndRecover) && + (CPA_TRUE != pOpData->compressAndVerifyAndRecover))) { + return CPA_STATUS_INVALID_PARAM; + } + + if ((CPA_FALSE == pOpData->compressAndVerify) && + (CPA_TRUE == pOpData->compressAndVerifyAndRecover)) { + return CPA_STATUS_INVALID_PARAM; + } + + + if ((CPA_TRUE == pOpData->compressAndVerify) && + (CPA_TRUE == pOpData->compressAndVerifyAndRecover) && + (CPA_FALSE == pOpData->integrityCrcCheck)) { + return cpaDcCompressData(dcInstance, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + pOpData->flushFlag, + callbackTag); + } + + if (CPA_FALSE == pOpData->compressAndVerify) { + LAC_INVALID_PARAM_LOG( + "Data compression without verification not allowed"); + return CPA_STATUS_UNSUPPORTED; + } + + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + + pService = (sal_compression_service_t *)insHandle; + + LAC_CHECK_NULL_PARAM(insHandle); + LAC_CHECK_NULL_PARAM(pSessionHandle); + LAC_CHECK_NULL_PARAM(pOpData); + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(insHandle); + + /* This check is outside the parameter checking as it is needed to + * manage zero length requests */ + if (LacBuffDesc_BufferListVerifyNull(pSrcBuff, + &srcBuffSize, + LAC_NO_ALIGNMENT_SHIFT) != + CPA_STATUS_SUCCESS) { + LAC_INVALID_PARAM_LOG("Invalid source buffer list parameter"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Ensure this is a compression instance */ + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); 
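+ /* Validate CNV support, session state and operation data before building and sending the request. */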
+ + if (CPA_TRUE == pOpData->compressAndVerify && + CPA_DC_STATEFUL == pSessionDesc->sessState) { + LAC_INVALID_PARAM_LOG( + "Invalid session state, stateful sessions " + "not supported with CNV"); + return CPA_STATUS_UNSUPPORTED; + } + + if (!(pService->generic_service_info.dcExtendedFeatures & + DC_CNV_EXTENDED_CAPABILITY) && + (CPA_TRUE == pOpData->compressAndVerify)) { + LAC_INVALID_PARAM_LOG( + "CompressAndVerify feature not supported"); + return CPA_STATUS_UNSUPPORTED; + } + + if (CPA_STATUS_SUCCESS != + dcParamCheck(insHandle, + pSessionHandle, + pService, + pSrcBuff, + pDestBuff, + pResults, + pSessionDesc, + pOpData->flushFlag, + srcBuffSize)) { + return CPA_STATUS_INVALID_PARAM; + } + if (CPA_STATUS_SUCCESS != dcCheckOpData(pService, pOpData)) { + return CPA_STATUS_INVALID_PARAM; + } + if (CPA_TRUE != pOpData->compressAndVerify) { + if (srcBuffSize > DC_COMP_MAX_BUFF_SIZE) { + LAC_LOG_ERROR( + "Compression payload greater than 64KB is " + "unsupported, when CnV is disabled\n"); + return CPA_STATUS_UNSUPPORTED; + } + } + + if (CPA_DC_STATEFUL == pSessionDesc->sessState) { + /* Lock the session to check if there are in-flight stateful + * requests */ + if (CPA_STATUS_SUCCESS != + LAC_SPINLOCK(&(pSessionDesc->sessionLock))) { + LAC_LOG_ERROR("Cannot unlock session lock"); + } + + /* Check if there is already one in-flight stateful request */ + if (0 != + qatUtilsAtomicGet( + &(pSessionDesc->pendingStatefulCbCount))) { + LAC_LOG_ERROR( + "Only one in-flight stateful request supported"); + if (CPA_STATUS_SUCCESS != + LAC_SPINUNLOCK(&(pSessionDesc->sessionLock))) { + LAC_LOG_ERROR("Cannot unlock session lock"); + } + return CPA_STATUS_RETRY; + } + + if (0 == srcBuffSize) { + if (CPA_TRUE == + dcZeroLengthRequests(pService, + pSessionDesc, + pResults, + pOpData->flushFlag, + callbackTag, + DC_COMPRESSION_REQUEST)) { + return CPA_STATUS_SUCCESS; + } + } + + qatUtilsAtomicInc(&(pSessionDesc->pendingStatefulCbCount)); + if (CPA_STATUS_SUCCESS != + LAC_SPINUNLOCK(&(pSessionDesc->sessionLock))) { + LAC_LOG_ERROR("Cannot unlock session lock"); + } + } + + if (CPA_TRUE == pOpData->compressAndVerify) { + cnvMode = DC_CNV; + } + + return dcCompDecompData(pService, + pSessionDesc, + dcInstance, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + pOpData->flushFlag, + pOpData, + callbackTag, + DC_COMPRESSION_REQUEST, + CPA_TRUE, + cnvMode); +} + +static CpaStatus +dcDecompressDataCheck(CpaInstanceHandle insHandle, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcRqResults *pResults, + CpaDcFlush flushFlag, + Cpa64U *srcBufferSize) +{ + sal_compression_service_t *pService = NULL; + dc_session_desc_t *pSessionDesc = NULL; + Cpa64U srcBuffSize = 0; + + pService = (sal_compression_service_t *)insHandle; + + LAC_CHECK_NULL_PARAM(insHandle); + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(insHandle); + + /* This check is outside the parameter checking as it is needed to + * manage zero length requests */ + if (LacBuffDesc_BufferListVerifyNull(pSrcBuff, + &srcBuffSize, + LAC_NO_ALIGNMENT_SHIFT) != + CPA_STATUS_SUCCESS) { + LAC_INVALID_PARAM_LOG("Invalid source buffer list parameter"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Ensure this is a compression instance */ + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + + if (dcCheckSourceData(pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + flushFlag, + srcBuffSize, + NULL) != CPA_STATUS_SUCCESS) { + return CPA_STATUS_INVALID_PARAM; + } + 
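+ /* Validate the destination buffer list for the decompression request. */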
if (dcCheckDestinationData(pService, + pSessionHandle, + pDestBuff, + DC_DECOMPRESSION_REQUEST) != + CPA_STATUS_SUCCESS) { + return CPA_STATUS_INVALID_PARAM; + } + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + + if (CPA_DC_DIR_COMPRESS == pSessionDesc->sessDirection) { + LAC_INVALID_PARAM_LOG("Invalid sessDirection value"); + return CPA_STATUS_INVALID_PARAM; + } + + + *srcBufferSize = srcBuffSize; + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcDecompressData(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcRqResults *pResults, + CpaDcFlush flushFlag, + void *callbackTag) +{ + sal_compression_service_t *pService = NULL; + dc_session_desc_t *pSessionDesc = NULL; + CpaInstanceHandle insHandle = NULL; + Cpa64U srcBuffSize = 0; + CpaStatus status = CPA_STATUS_SUCCESS; + + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + + status = dcDecompressDataCheck(insHandle, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + flushFlag, + &srcBuffSize); + if (CPA_STATUS_SUCCESS != status) { + return status; + } + + pService = (sal_compression_service_t *)insHandle; + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + + if (CPA_DC_STATEFUL == pSessionDesc->sessState) { + /* Lock the session to check if there are in-flight stateful + * requests */ + if (CPA_STATUS_SUCCESS != + LAC_SPINLOCK(&(pSessionDesc->sessionLock))) { + LAC_LOG_ERROR("Cannot lock session lock"); + return CPA_STATUS_RESOURCE; + } + + /* Check if there is already one in-flight stateful request */ + if (0 != + qatUtilsAtomicGet( + &(pSessionDesc->pendingStatefulCbCount))) { + LAC_LOG_ERROR( + "Only one in-flight stateful request supported"); + if (CPA_STATUS_SUCCESS != + LAC_SPINUNLOCK(&(pSessionDesc->sessionLock))) { + LAC_LOG_ERROR("Cannot unlock session lock"); + } + return CPA_STATUS_RETRY; + } + + if ((0 == srcBuffSize) || + ((1 == srcBuffSize) && (CPA_DC_FLUSH_FINAL != flushFlag) && + (CPA_DC_FLUSH_FULL != flushFlag))) { + if (CPA_TRUE == + dcZeroLengthRequests(pService, + pSessionDesc, + pResults, + flushFlag, + callbackTag, + DC_DECOMPRESSION_REQUEST)) { + return CPA_STATUS_SUCCESS; + } + } + + qatUtilsAtomicInc(&(pSessionDesc->pendingStatefulCbCount)); + if (CPA_STATUS_SUCCESS != + LAC_SPINUNLOCK(&(pSessionDesc->sessionLock))) { + LAC_LOG_ERROR("Cannot unlock session lock"); + } + } + + return dcCompDecompData(pService, + pSessionDesc, + dcInstance, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + flushFlag, + NULL, + callbackTag, + DC_DECOMPRESSION_REQUEST, + CPA_TRUE, + DC_NO_CNV); +} + +CpaStatus +cpaDcDecompressData2(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaBufferList *pSrcBuff, + CpaBufferList *pDestBuff, + CpaDcOpData *pOpData, + CpaDcRqResults *pResults, + void *callbackTag) +{ + sal_compression_service_t *pService = NULL; + dc_session_desc_t *pSessionDesc = NULL; + CpaInstanceHandle insHandle = NULL; + CpaStatus status = CPA_STATUS_SUCCESS; + Cpa64U srcBuffSize = 0; + LAC_CHECK_NULL_PARAM(pOpData); + + if (CPA_FALSE == pOpData->integrityCrcCheck) { + + return cpaDcDecompressData(dcInstance, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + pOpData->flushFlag, + callbackTag); + } + + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + + status = dcDecompressDataCheck(insHandle, + pSessionHandle, + pSrcBuff, + 
pDestBuff, + pResults, + pOpData->flushFlag, + &srcBuffSize); + if (CPA_STATUS_SUCCESS != status) { + return status; + } + + pService = (sal_compression_service_t *)insHandle; + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + + if (CPA_DC_STATEFUL == pSessionDesc->sessState) { + LAC_INVALID_PARAM_LOG("Invalid session: Stateful session is " + "not supported"); + return CPA_STATUS_INVALID_PARAM; + } + + return dcCompDecompData(pService, + pSessionDesc, + insHandle, + pSessionHandle, + pSrcBuff, + pDestBuff, + pResults, + pOpData->flushFlag, + pOpData, + callbackTag, + DC_DECOMPRESSION_REQUEST, + CPA_TRUE, + DC_NO_CNV); +} diff --git a/sys/dev/qat/qat_api/common/compression/dc_dp.c b/sys/dev/qat/qat_api/common/compression/dc_dp.c new file mode 100644 index 000000000000..4a24bf17dc32 --- /dev/null +++ b/sys/dev/qat/qat_api/common/compression/dc_dp.c @@ -0,0 +1,545 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_dp.c + * + * @defgroup cpaDcDp Data Compression Data Plane API + * + * @ingroup cpaDcDp + * + * @description + * Implementation of the Data Compression DP operations. + * + *****************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ +#include "cpa.h" +#include "cpa_dc.h" +#include "cpa_dc_dp.h" + +#include "icp_qat_fw_comp.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ +#include "dc_session.h" +#include "dc_datapath.h" +#include "lac_common.h" +#include "lac_mem.h" +#include "lac_mem_pools.h" +#include "sal_types_compression.h" +#include "lac_sal.h" +#include "lac_sync.h" +#include "sal_service_state.h" +#include "sal_qat_cmn_msg.h" +#include "icp_sal_poll.h" + +/** + ***************************************************************************** + * @ingroup cpaDcDp + * Check that pOpData is valid + * + * @description + * Check that all the parameters defined in the pOpData are valid + * + * @param[in] pOpData Pointer to a structure containing the + * request parameters + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + * + *****************************************************************************/ +static CpaStatus +dcDataPlaneParamCheck(const CpaDcDpOpData *pOpData) +{ + sal_compression_service_t *pService = NULL; + dc_session_desc_t *pSessionDesc = NULL; + + LAC_CHECK_NULL_PARAM(pOpData); + LAC_CHECK_NULL_PARAM(pOpData->dcInstance); + LAC_CHECK_NULL_PARAM(pOpData->pSessionHandle); + + /* Ensure this is a compression instance */ + SAL_CHECK_INSTANCE_TYPE(pOpData->dcInstance, + SAL_SERVICE_TYPE_COMPRESSION); + + pService = (sal_compression_service_t *)(pOpData->dcInstance); + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pOpData->pSessionHandle); + if (NULL == pSessionDesc) { + QAT_UTILS_LOG("Session handle not as expected.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (CPA_FALSE == pSessionDesc->isDcDp) { + QAT_UTILS_LOG("The session type should be data plane.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Compressing zero byte is not 
supported */ + if ((CPA_DC_DIR_COMPRESS == pSessionDesc->sessDirection) && + (0 == pOpData->bufferLenToCompress)) { + QAT_UTILS_LOG( + "The source buffer length to compress needs to be greater than zero byte.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pOpData->sessDirection > CPA_DC_DIR_DECOMPRESS) { + QAT_UTILS_LOG("Invalid direction of operation.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (0 == pOpData->srcBuffer) { + QAT_UTILS_LOG("Invalid srcBuffer\n"); + return CPA_STATUS_INVALID_PARAM; + } + if (0 == pOpData->destBuffer) { + QAT_UTILS_LOG("Invalid destBuffer\n"); + return CPA_STATUS_INVALID_PARAM; + } + if (pOpData->srcBuffer == pOpData->destBuffer) { + QAT_UTILS_LOG("In place operation is not supported.\n"); + return CPA_STATUS_INVALID_PARAM; + } + if (0 == pOpData->thisPhys) { + QAT_UTILS_LOG("Invalid thisPhys\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if ((CPA_TRUE != pOpData->compressAndVerify) && + (CPA_FALSE != pOpData->compressAndVerify)) { + QAT_UTILS_LOG("Invalid compressAndVerify\n"); + return CPA_STATUS_INVALID_PARAM; + } + if ((CPA_TRUE == pOpData->compressAndVerify) && + !(pService->generic_service_info.dcExtendedFeatures & + DC_CNV_EXTENDED_CAPABILITY)) { + QAT_UTILS_LOG("Invalid compressAndVerify, no CNV capability\n"); + return CPA_STATUS_UNSUPPORTED; + } + if ((CPA_TRUE != pOpData->compressAndVerifyAndRecover) && + (CPA_FALSE != pOpData->compressAndVerifyAndRecover)) { + QAT_UTILS_LOG("Invalid compressAndVerifyAndRecover\n"); + return CPA_STATUS_INVALID_PARAM; + } + if ((CPA_TRUE == pOpData->compressAndVerifyAndRecover) && + (CPA_FALSE == pOpData->compressAndVerify)) { + QAT_UTILS_LOG("CnVnR option set without setting CnV\n"); + return CPA_STATUS_INVALID_PARAM; + } + if ((CPA_TRUE == pOpData->compressAndVerifyAndRecover) && + !(pService->generic_service_info.dcExtendedFeatures & + DC_CNVNR_EXTENDED_CAPABILITY)) { + QAT_UTILS_LOG( + "Invalid CnVnR option set and no CnVnR capability.\n"); + return CPA_STATUS_UNSUPPORTED; + } + + if ((CPA_DP_BUFLIST == pOpData->srcBufferLen) && + (CPA_DP_BUFLIST != pOpData->destBufferLen)) { + QAT_UTILS_LOG( + "The source and destination buffers need to be of the same type (both flat buffers or buffer lists).\n"); + return CPA_STATUS_INVALID_PARAM; + } + if ((CPA_DP_BUFLIST != pOpData->srcBufferLen) && + (CPA_DP_BUFLIST == pOpData->destBufferLen)) { + QAT_UTILS_LOG( + "The source and destination buffers need to be of the same type (both flat buffers or buffer lists).\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (CPA_DP_BUFLIST != pOpData->srcBufferLen) { + if (pOpData->srcBufferLen < pOpData->bufferLenToCompress) { + QAT_UTILS_LOG( + "srcBufferLen is smaller than bufferLenToCompress.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pOpData->destBufferLen < pOpData->bufferLenForData) { + QAT_UTILS_LOG( + "destBufferLen is smaller than bufferLenForData.\n"); + return CPA_STATUS_INVALID_PARAM; + } + } else { + /* We are assuming that there is enough memory in the source and + * destination buffer lists. 
We only receive physical addresses + * of the + * buffers so we are unable to test it here */ + LAC_CHECK_8_BYTE_ALIGNMENT(pOpData->srcBuffer); + LAC_CHECK_8_BYTE_ALIGNMENT(pOpData->destBuffer); + } + + LAC_CHECK_8_BYTE_ALIGNMENT(pOpData->thisPhys); + + if ((CPA_DC_DIR_COMPRESS == pSessionDesc->sessDirection) || + (CPA_DC_DIR_COMBINED == pSessionDesc->sessDirection)) { + if (CPA_DC_HT_FULL_DYNAMIC == pSessionDesc->huffType) { + /* Check if Intermediate Buffer Array pointer is NULL */ + if ((0 == pService->pInterBuffPtrsArrayPhyAddr) || + (NULL == pService->pInterBuffPtrsArray)) { + QAT_UTILS_LOG( + "No intermediate buffer defined for this instance - see cpaDcStartInstance.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Ensure that the destination buffer length for data is + * greater + * or equal to 128B */ + if (pOpData->bufferLenForData < + DC_DEST_BUFFER_DYN_MIN_SIZE) { + QAT_UTILS_LOG( + "Destination buffer length for data should be greater or equal to 128B.\n"); + return CPA_STATUS_INVALID_PARAM; + } + } else { + /* Ensure that the destination buffer length for data is + * greater + * or equal to min output buffsize */ + if (pOpData->bufferLenForData < + pService->comp_device_data.minOutputBuffSize) { + QAT_UTILS_LOG( + "Destination buffer size should be greater or equal to %d bytes.\n", + pService->comp_device_data + .minOutputBuffSize); + return CPA_STATUS_INVALID_PARAM; + } + } + } + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcDpGetSessionSize(CpaInstanceHandle dcInstance, + CpaDcSessionSetupData *pSessionData, + Cpa32U *pSessionSize) +{ + return dcGetSessionSize(dcInstance, pSessionData, pSessionSize, NULL); +} + +CpaStatus +cpaDcDpInitSession(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaDcSessionSetupData *pSessionData) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + dc_session_desc_t *pSessionDesc = NULL; + sal_compression_service_t *pService = NULL; + + LAC_CHECK_INSTANCE_HANDLE(dcInstance); + SAL_CHECK_INSTANCE_TYPE(dcInstance, SAL_SERVICE_TYPE_COMPRESSION); + + pService = (sal_compression_service_t *)dcInstance; + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(pService); + + /* Stateful is not supported */ + if (CPA_DC_STATELESS != pSessionData->sessState) { + QAT_UTILS_LOG("Invalid sessState value\n"); + return CPA_STATUS_INVALID_PARAM; + } + + status = + dcInitSession(dcInstance, pSessionHandle, pSessionData, NULL, NULL); + if (CPA_STATUS_SUCCESS == status) { + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + pSessionDesc->isDcDp = CPA_TRUE; + + ICP_QAT_FW_COMN_PTR_TYPE_SET( + pSessionDesc->reqCacheDecomp.comn_hdr.comn_req_flags, + DC_DP_QAT_PTR_TYPE); + ICP_QAT_FW_COMN_PTR_TYPE_SET( + pSessionDesc->reqCacheComp.comn_hdr.comn_req_flags, + DC_DP_QAT_PTR_TYPE); + } + + return status; +} + +CpaStatus +cpaDcDpRemoveSession(const CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle) +{ + return cpaDcRemoveSession(dcInstance, pSessionHandle); +} + +CpaStatus +cpaDcDpRegCbFunc(const CpaInstanceHandle dcInstance, + const CpaDcDpCallbackFn pNewCb) +{ + sal_compression_service_t *pService = NULL; + + LAC_CHECK_NULL_PARAM(dcInstance); + SAL_CHECK_INSTANCE_TYPE(dcInstance, SAL_SERVICE_TYPE_COMPRESSION); + LAC_CHECK_NULL_PARAM(pNewCb); + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(dcInstance); + + pService = (sal_compression_service_t *)dcInstance; + pService->pDcDpCb = pNewCb; + + return CPA_STATUS_SUCCESS; +} + +/** + 
***************************************************************************** + * @ingroup cpaDcDp + * + * @description + * Writes the message to the ring + * + * @param[in] pOpData Pointer to a structure containing the + * request parameters + * @param[in] pCurrentQatMsg Pointer to current QAT message on the ring + * + *****************************************************************************/ +static void +dcDpWriteRingMsg(CpaDcDpOpData *pOpData, icp_qat_fw_comp_req_t *pCurrentQatMsg) +{ + icp_qat_fw_comp_req_t *pReqCache = NULL; + dc_session_desc_t *pSessionDesc = NULL; + Cpa8U bufferFormat; + + Cpa8U cnvDecompReq = ICP_QAT_FW_COMP_NO_CNV; + Cpa8U cnvnrCompReq = ICP_QAT_FW_COMP_NO_CNV_RECOVERY; + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pOpData->pSessionHandle); + + if (CPA_DC_DIR_COMPRESS == pOpData->sessDirection) { + pReqCache = &(pSessionDesc->reqCacheComp); + /* CNV check */ + if (CPA_TRUE == pOpData->compressAndVerify) { + cnvDecompReq = ICP_QAT_FW_COMP_CNV; + /* CNVNR check */ + if (CPA_TRUE == pOpData->compressAndVerifyAndRecover) { + cnvnrCompReq = ICP_QAT_FW_COMP_CNV_RECOVERY; + } + } + } else { + pReqCache = &(pSessionDesc->reqCacheDecomp); + } + + /* Fills in the template DC ET ring message - cached from the + * session descriptor */ + memcpy((void *)pCurrentQatMsg, + (void *)(pReqCache), + (LAC_QAT_DC_REQ_SZ_LW * LAC_LONG_WORD_IN_BYTES)); + + if (CPA_DP_BUFLIST == pOpData->srcBufferLen) { + bufferFormat = QAT_COMN_PTR_TYPE_SGL; + } else { + bufferFormat = QAT_COMN_PTR_TYPE_FLAT; + } + + pCurrentQatMsg->comp_pars.req_par_flags |= + ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD( + 0, 0, 0, cnvDecompReq, cnvnrCompReq, 0); + + SalQatMsg_CmnMidWrite((icp_qat_fw_la_bulk_req_t *)pCurrentQatMsg, + pOpData, + bufferFormat, + pOpData->srcBuffer, + pOpData->destBuffer, + pOpData->srcBufferLen, + pOpData->destBufferLen); + + pCurrentQatMsg->comp_pars.comp_len = pOpData->bufferLenToCompress; + pCurrentQatMsg->comp_pars.out_buffer_sz = pOpData->bufferLenForData; +} + +CpaStatus +cpaDcDpEnqueueOp(CpaDcDpOpData *pOpData, const CpaBoolean performOpNow) +{ + icp_qat_fw_comp_req_t *pCurrentQatMsg = NULL; + icp_comms_trans_handle trans_handle = NULL; + dc_session_desc_t *pSessionDesc = NULL; + CpaStatus status = CPA_STATUS_SUCCESS; + + status = dcDataPlaneParamCheck(pOpData); + if (CPA_STATUS_SUCCESS != status) { + return status; + } + + if ((CPA_FALSE == pOpData->compressAndVerify) && + (CPA_DC_DIR_COMPRESS == pOpData->sessDirection)) { + return CPA_STATUS_UNSUPPORTED; + } + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(pOpData->dcInstance); + + trans_handle = ((sal_compression_service_t *)pOpData->dcInstance) + ->trans_handle_compression_tx; + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pOpData->pSessionHandle); + + if ((CPA_DC_DIR_COMPRESS == pOpData->sessDirection) && + (CPA_DC_DIR_DECOMPRESS == pSessionDesc->sessDirection)) { + QAT_UTILS_LOG( + "The session does not support this direction of operation.\n"); + return CPA_STATUS_INVALID_PARAM; + } else if ((CPA_DC_DIR_DECOMPRESS == pOpData->sessDirection) && + (CPA_DC_DIR_COMPRESS == pSessionDesc->sessDirection)) { + QAT_UTILS_LOG( + "The session does not support this direction of operation.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + icp_adf_getSingleQueueAddr(trans_handle, (void **)&pCurrentQatMsg); + if (NULL == pCurrentQatMsg) { + return CPA_STATUS_RETRY; + } + + dcDpWriteRingMsg(pOpData, pCurrentQatMsg); + pSessionDesc->pendingDpStatelessCbCount++; + + if (CPA_TRUE == performOpNow) { + 
SalQatMsg_updateQueueTail(trans_handle); + } + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcDpEnqueueOpBatch(const Cpa32U numberRequests, + CpaDcDpOpData *pOpData[], + const CpaBoolean performOpNow) +{ + icp_qat_fw_comp_req_t *pCurrentQatMsg = NULL; + icp_comms_trans_handle trans_handle = NULL; + dc_session_desc_t *pSessionDesc = NULL; + Cpa32U i = 0; + CpaStatus status = CPA_STATUS_SUCCESS; + sal_compression_service_t *pService = NULL; + + LAC_CHECK_NULL_PARAM(pOpData); + LAC_CHECK_NULL_PARAM(pOpData[0]); + LAC_CHECK_NULL_PARAM(pOpData[0]->dcInstance); + + pService = (sal_compression_service_t *)(pOpData[0]->dcInstance); + if ((numberRequests == 0) || + (numberRequests > pService->maxNumCompConcurrentReq)) { + QAT_UTILS_LOG( + "The number of requests needs to be between 1 and %d.\n", + pService->maxNumCompConcurrentReq); + return CPA_STATUS_INVALID_PARAM; + } + + for (i = 0; i < numberRequests; i++) { + status = dcDataPlaneParamCheck(pOpData[i]); + if (CPA_STATUS_SUCCESS != status) { + return status; + } + + /* Check that all instance handles and session handles are the + * same */ + if (pOpData[i]->dcInstance != pOpData[0]->dcInstance) { + QAT_UTILS_LOG( + "All instance handles should be the same in the pOpData.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (pOpData[i]->pSessionHandle != pOpData[0]->pSessionHandle) { + QAT_UTILS_LOG( + "All session handles should be the same in the pOpData.\n"); + return CPA_STATUS_INVALID_PARAM; + } + } + + for (i = 0; i < numberRequests; i++) { + if ((CPA_FALSE == pOpData[i]->compressAndVerify) && + (CPA_DC_DIR_COMPRESS == pOpData[i]->sessDirection)) { + return CPA_STATUS_UNSUPPORTED; + } + } + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(pOpData[0]->dcInstance); + + trans_handle = ((sal_compression_service_t *)pOpData[0]->dcInstance) + ->trans_handle_compression_tx; + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pOpData[0]->pSessionHandle); + + for (i = 0; i < numberRequests; i++) { + if ((CPA_DC_DIR_COMPRESS == pOpData[i]->sessDirection) && + (CPA_DC_DIR_DECOMPRESS == pSessionDesc->sessDirection)) { + QAT_UTILS_LOG( + "The session does not support this direction of operation.\n"); + return CPA_STATUS_INVALID_PARAM; + } else if ((CPA_DC_DIR_DECOMPRESS == + pOpData[i]->sessDirection) && + (CPA_DC_DIR_COMPRESS == + pSessionDesc->sessDirection)) { + QAT_UTILS_LOG( + "The session does not support this direction of operation.\n"); + return CPA_STATUS_INVALID_PARAM; + } + } + + icp_adf_getQueueMemory(trans_handle, + numberRequests, + (void **)&pCurrentQatMsg); + if (NULL == pCurrentQatMsg) { + return CPA_STATUS_RETRY; + } + + for (i = 0; i < numberRequests; i++) { + dcDpWriteRingMsg(pOpData[i], pCurrentQatMsg); + icp_adf_getQueueNext(trans_handle, (void **)&pCurrentQatMsg); + } + + pSessionDesc->pendingDpStatelessCbCount += numberRequests; + + if (CPA_TRUE == performOpNow) { + SalQatMsg_updateQueueTail(trans_handle); + } + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +icp_sal_DcPollDpInstance(CpaInstanceHandle dcInstance, Cpa32U responseQuota) +{ + icp_comms_trans_handle trans_handle = NULL; + + LAC_CHECK_INSTANCE_HANDLE(dcInstance); + SAL_CHECK_INSTANCE_TYPE(dcInstance, SAL_SERVICE_TYPE_COMPRESSION); + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(dcInstance); + + trans_handle = ((sal_compression_service_t *)dcInstance) + ->trans_handle_compression_rx; + + return icp_adf_pollQueue(trans_handle, responseQuota); +} + +CpaStatus +cpaDcDpPerformOpNow(CpaInstanceHandle 
dcInstance) +{ + icp_comms_trans_handle trans_handle = NULL; + + LAC_CHECK_NULL_PARAM(dcInstance); + SAL_CHECK_INSTANCE_TYPE(dcInstance, SAL_SERVICE_TYPE_COMPRESSION); + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(dcInstance); + + trans_handle = ((sal_compression_service_t *)dcInstance) + ->trans_handle_compression_tx; + + if (CPA_TRUE == icp_adf_queueDataToSend(trans_handle)) { + SalQatMsg_updateQueueTail(trans_handle); + } + + return CPA_STATUS_SUCCESS; +} diff --git a/sys/dev/qat/qat_api/common/compression/dc_header_footer.c b/sys/dev/qat/qat_api/common/compression/dc_header_footer.c new file mode 100644 index 000000000000..4a92e20ba0f4 --- /dev/null +++ b/sys/dev/qat/qat_api/common/compression/dc_header_footer.c @@ -0,0 +1,237 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_header_footer.c + * + * @ingroup Dc_DataCompression + * + * @description + * Implementation of the Data Compression header and footer operations. + * + *****************************************************************************/ + +/* + ******************************************************************************* + * Include public/global header files + ******************************************************************************* + */ +#include "cpa.h" +#include "cpa_dc.h" +#include "icp_adf_init.h" + +/* + ******************************************************************************* + * Include private header files + ******************************************************************************* + */ +#include "dc_header_footer.h" +#include "dc_session.h" +#include "dc_datapath.h" + +CpaStatus +cpaDcGenerateHeader(CpaDcSessionHandle pSessionHandle, + CpaFlatBuffer *pDestBuff, + Cpa32U *count) +{ + dc_session_desc_t *pSessionDesc = NULL; + + LAC_CHECK_NULL_PARAM(pSessionHandle); + LAC_CHECK_NULL_PARAM(pDestBuff); + LAC_CHECK_NULL_PARAM(pDestBuff->pData); + LAC_CHECK_NULL_PARAM(count); + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + + if (NULL == pSessionDesc) { + QAT_UTILS_LOG("Session handle not as expected\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (CPA_DC_DIR_DECOMPRESS == pSessionDesc->sessDirection) { + QAT_UTILS_LOG("Invalid session direction\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (CPA_DC_DEFLATE == pSessionDesc->compType) { + /* Adding a Gzip header */ + if (CPA_DC_CRC32 == pSessionDesc->checksumType) { + Cpa8U *pDest = pDestBuff->pData; + + if (pDestBuff->dataLenInBytes < DC_GZIP_HEADER_SIZE) { + QAT_UTILS_LOG( + "The dataLenInBytes of the dest buffer is too small.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + pDest[0] = DC_GZIP_ID1; /* ID1 */ + pDest[1] = DC_GZIP_ID2; /* ID2 */ + pDest[2] = + 0x08; /* CM = 8 denotes "deflate" compression */ + pDest[3] = 0x00; /* FLG = 0 denotes "No extra fields" */ + pDest[4] = 0x00; + pDest[5] = 0x00; + pDest[6] = 0x00; + pDest[7] = 0x00; /* MTIME = 0x00 means time stamp not + available */ + + /* XFL = 4 - compressor used fastest compression, */ + /* XFL = 2 - compressor used maximum compression. 
*/ + pDest[8] = 0; + if (CPA_DC_L1 == pSessionDesc->compLevel) + pDest[8] = DC_GZIP_FAST_COMP; + else if (CPA_DC_L4 >= pSessionDesc->compLevel) + pDest[8] = DC_GZIP_MAX_COMP; + + pDest[9] = + DC_GZIP_FILESYSTYPE; /* OS = 0 means FAT filesystem + (MS-DOS, OS/2, NT/Win32), 3 - Unix */ + + /* Set to the number of bytes added to the buffer */ + *count = DC_GZIP_HEADER_SIZE; + } + + /* Adding a Zlib header */ + else if (CPA_DC_ADLER32 == pSessionDesc->checksumType) { + Cpa8U *pDest = pDestBuff->pData; + Cpa16U header = 0, level = 0; + + if (pDestBuff->dataLenInBytes < DC_ZLIB_HEADER_SIZE) { + QAT_UTILS_LOG( + "The dataLenInBytes of the dest buffer is too small.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + /* CMF = CM | CMINFO. + CM = 8 denotes "deflate" compression, + CMINFO = 7 indicates a 32K window size */ + /* Depending on the device, at compression levels above + L1, the + window size can be 8 or 16K bytes. + The file will decompress ok if a greater window size + is specified + in the header. */ + header = + (DC_ZLIB_CM_DEFLATE + + (DC_32K_WINDOW_SIZE << DC_ZLIB_WINDOWSIZE_OFFSET)) + << LAC_NUM_BITS_IN_BYTE; + + switch (pSessionDesc->compLevel) { + case CPA_DC_L1: + level = DC_ZLIB_LEVEL_0; + break; + case CPA_DC_L2: + level = DC_ZLIB_LEVEL_1; + break; + case CPA_DC_L3: + level = DC_ZLIB_LEVEL_2; + break; + default: + level = DC_ZLIB_LEVEL_3; + } + + /* Bits 6 - 7: FLEVEL, compression level */ + header |= level << DC_ZLIB_FLEVEL_OFFSET; + + /* The header has to be a multiple of 31 */ + header += DC_ZLIB_HEADER_OFFSET - + (header % DC_ZLIB_HEADER_OFFSET); + + pDest[0] = (Cpa8U)(header >> LAC_NUM_BITS_IN_BYTE); + pDest[1] = (Cpa8U)header; + + /* Set to the number of bytes added to the buffer */ + *count = DC_ZLIB_HEADER_SIZE; + } + + /* If deflate but no checksum required */ + else { + *count = 0; + } + } else { + /* There is no header for other compressed data */ + *count = 0; + } + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcGenerateFooter(CpaDcSessionHandle pSessionHandle, + CpaFlatBuffer *pDestBuff, + CpaDcRqResults *pRes) +{ + dc_session_desc_t *pSessionDesc = NULL; + + LAC_CHECK_NULL_PARAM(pSessionHandle); + LAC_CHECK_NULL_PARAM(pDestBuff); + LAC_CHECK_NULL_PARAM(pDestBuff->pData); + LAC_CHECK_NULL_PARAM(pRes); + + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + + if (NULL == pSessionDesc) { + QAT_UTILS_LOG("Session handle not as expected\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (CPA_DC_DIR_DECOMPRESS == pSessionDesc->sessDirection) { + QAT_UTILS_LOG("Invalid session direction\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if (CPA_DC_DEFLATE == pSessionDesc->compType) { + if (CPA_DC_CRC32 == pSessionDesc->checksumType) { + Cpa8U *pDest = pDestBuff->pData; + Cpa32U crc32 = pRes->checksum; + Cpa64U totalLenBeforeCompress = + pSessionDesc->cumulativeConsumedBytes; + + if (pDestBuff->dataLenInBytes < DC_GZIP_FOOTER_SIZE) { + QAT_UTILS_LOG( + "The dataLenInBytes of the dest buffer is too small.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Crc32 of the uncompressed data */ + pDest[0] = (Cpa8U)crc32; + pDest[1] = (Cpa8U)(crc32 >> LAC_NUM_BITS_IN_BYTE); + pDest[2] = (Cpa8U)(crc32 >> 2 * LAC_NUM_BITS_IN_BYTE); + pDest[3] = (Cpa8U)(crc32 >> 3 * LAC_NUM_BITS_IN_BYTE); + + /* Length of the uncompressed data */ + pDest[4] = (Cpa8U)totalLenBeforeCompress; + pDest[5] = (Cpa8U)(totalLenBeforeCompress >> + LAC_NUM_BITS_IN_BYTE); + pDest[6] = (Cpa8U)(totalLenBeforeCompress >> + 2 * LAC_NUM_BITS_IN_BYTE); + pDest[7] = (Cpa8U)(totalLenBeforeCompress >> + 3 * 
LAC_NUM_BITS_IN_BYTE); + + /* Increment produced by the number of bytes added to + * the buffer */ + pRes->produced += DC_GZIP_FOOTER_SIZE; + } else if (CPA_DC_ADLER32 == pSessionDesc->checksumType) { + Cpa8U *pDest = pDestBuff->pData; + Cpa32U adler32 = pRes->checksum; + + if (pDestBuff->dataLenInBytes < DC_ZLIB_FOOTER_SIZE) { + QAT_UTILS_LOG( + "The dataLenInBytes of the dest buffer is too small.\n"); + return CPA_STATUS_INVALID_PARAM; + } + + /* Adler32 of the uncompressed data */ + pDest[0] = (Cpa8U)(adler32 >> 3 * LAC_NUM_BITS_IN_BYTE); + pDest[1] = (Cpa8U)(adler32 >> 2 * LAC_NUM_BITS_IN_BYTE); + pDest[2] = (Cpa8U)(adler32 >> LAC_NUM_BITS_IN_BYTE); + pDest[3] = (Cpa8U)adler32; + + /* Increment produced by the number of bytes added to + * the buffer */ + pRes->produced += DC_ZLIB_FOOTER_SIZE; + } + } + + return CPA_STATUS_SUCCESS; +} diff --git a/sys/dev/qat/qat_api/common/compression/dc_session.c b/sys/dev/qat/qat_api/common/compression/dc_session.c new file mode 100644 index 000000000000..1d742e227a10 --- /dev/null +++ b/sys/dev/qat/qat_api/common/compression/dc_session.c @@ -0,0 +1,957 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_session.c + * + * @ingroup Dc_DataCompression + * + * @description + * Implementation of the Data Compression session operations. + * + *****************************************************************************/ + +/* + ******************************************************************************* + * Include public/global header files + ******************************************************************************* + */ +#include "cpa.h" +#include "cpa_dc.h" + +#include "icp_qat_fw.h" +#include "icp_qat_fw_comp.h" +#include "icp_qat_hw.h" + +/* + ******************************************************************************* + * Include private header files + ******************************************************************************* + */ +#include "dc_session.h" +#include "dc_datapath.h" +#include "lac_mem_pools.h" +#include "sal_types_compression.h" +#include "lac_buffer_desc.h" +#include "sal_service_state.h" +#include "sal_qat_cmn_msg.h" + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Check that pSessionData is valid + * + * @description + * Check that all the parameters defined in the pSessionData are valid + * + * @param[in] pSessionData Pointer to a user instantiated structure + * containing session data + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_FAIL Function failed to find device + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + * @retval CPA_STATUS_UNSUPPORTED Unsupported algorithm/feature + * + *****************************************************************************/ +static CpaStatus +dcCheckSessionData(const CpaDcSessionSetupData *pSessionData, + CpaInstanceHandle dcInstance) +{ + CpaDcInstanceCapabilities instanceCapabilities = { 0 }; + + cpaDcQueryCapabilities(dcInstance, &instanceCapabilities); + + if ((pSessionData->compLevel < CPA_DC_L1) || + (pSessionData->compLevel > CPA_DC_L9)) { + QAT_UTILS_LOG("Invalid compLevel value\n"); + return CPA_STATUS_INVALID_PARAM; + } + if ((pSessionData->autoSelectBestHuffmanTree < CPA_DC_ASB_DISABLED) || + (pSessionData->autoSelectBestHuffmanTree > + 
CPA_DC_ASB_UNCOMP_STATIC_DYNAMIC_WITH_NO_HDRS)) { + QAT_UTILS_LOG("Invalid autoSelectBestHuffmanTree value\n"); + return CPA_STATUS_INVALID_PARAM; + } + if (pSessionData->compType != CPA_DC_DEFLATE) { + QAT_UTILS_LOG("Invalid compType value\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if ((pSessionData->huffType < CPA_DC_HT_STATIC) || + (pSessionData->huffType > CPA_DC_HT_FULL_DYNAMIC) || + (CPA_DC_HT_PRECOMP == pSessionData->huffType)) { + QAT_UTILS_LOG("Invalid huffType value\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if ((pSessionData->sessDirection < CPA_DC_DIR_COMPRESS) || + (pSessionData->sessDirection > CPA_DC_DIR_COMBINED)) { + QAT_UTILS_LOG("Invalid sessDirection value\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if ((pSessionData->sessState < CPA_DC_STATEFUL) || + (pSessionData->sessState > CPA_DC_STATELESS)) { + QAT_UTILS_LOG("Invalid sessState value\n"); + return CPA_STATUS_INVALID_PARAM; + } + + if ((pSessionData->checksum < CPA_DC_NONE) || + (pSessionData->checksum > CPA_DC_ADLER32)) { + QAT_UTILS_LOG("Invalid checksum value\n"); + return CPA_STATUS_INVALID_PARAM; + } + + return CPA_STATUS_SUCCESS; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Populate the compression hardware block + * + * @description + * This function will populate the compression hardware block and update + * the size in bytes of the block + * + * @param[in] pSessionDesc Pointer to the session descriptor + * @param[in] pCompConfig Pointer to slice config word + * @param[in] compDecomp Direction of the operation + * @param[in] enableDmm Delayed Match Mode + * + *****************************************************************************/ +static void +dcCompHwBlockPopulate(dc_session_desc_t *pSessionDesc, + icp_qat_hw_compression_config_t *pCompConfig, + dc_request_dir_t compDecomp, + icp_qat_hw_compression_delayed_match_t enableDmm) +{ + icp_qat_hw_compression_direction_t dir = + ICP_QAT_HW_COMPRESSION_DIR_COMPRESS; + icp_qat_hw_compression_algo_t algo = + ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE; + icp_qat_hw_compression_depth_t depth = ICP_QAT_HW_COMPRESSION_DEPTH_1; + icp_qat_hw_compression_file_type_t filetype = + ICP_QAT_HW_COMPRESSION_FILE_TYPE_0; + + /* Set the direction */ + if (DC_COMPRESSION_REQUEST == compDecomp) { + dir = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS; + } else { + dir = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS; + } + + if (CPA_DC_DEFLATE == pSessionDesc->compType) { + algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE; + } else { + QAT_UTILS_LOG("Algorithm not supported for Compression\n"); + } + + /* Set the depth */ + if (DC_DECOMPRESSION_REQUEST == compDecomp) { + depth = ICP_QAT_HW_COMPRESSION_DEPTH_1; + } else { + switch (pSessionDesc->compLevel) { + case CPA_DC_L1: + depth = ICP_QAT_HW_COMPRESSION_DEPTH_1; + break; + case CPA_DC_L2: + depth = ICP_QAT_HW_COMPRESSION_DEPTH_4; + break; + case CPA_DC_L3: + depth = ICP_QAT_HW_COMPRESSION_DEPTH_8; + break; + default: + depth = ICP_QAT_HW_COMPRESSION_DEPTH_16; + } + } + + /* The file type is set to ICP_QAT_HW_COMPRESSION_FILE_TYPE_0. 
The other + * modes will be used in the future for precompiled huffman trees */ + filetype = ICP_QAT_HW_COMPRESSION_FILE_TYPE_0; + + pCompConfig->val = ICP_QAT_HW_COMPRESSION_CONFIG_BUILD( + dir, enableDmm, algo, depth, filetype); + + pCompConfig->reserved = 0; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Populate the compression content descriptor + * + * @description + * This function will populate the compression content descriptor + * + * @param[in] pService Pointer to the service + * @param[in] pSessionDesc Pointer to the session descriptor + * @param[in] contextBufferAddrPhys Physical address of the context buffer + * @param[out] pMsg Pointer to the compression message + * @param[in] nextSlice Next slice + * @param[in] compDecomp Direction of the operation + * + *****************************************************************************/ +static void +dcCompContentDescPopulate(sal_compression_service_t *pService, + dc_session_desc_t *pSessionDesc, + CpaPhysicalAddr contextBufferAddrPhys, + icp_qat_fw_comp_req_t *pMsg, + icp_qat_fw_slice_t nextSlice, + dc_request_dir_t compDecomp) +{ + + icp_qat_fw_comp_cd_hdr_t *pCompControlBlock = NULL; + icp_qat_hw_compression_config_t *pCompConfig = NULL; + CpaBoolean bankEnabled = CPA_FALSE; + + pCompControlBlock = (icp_qat_fw_comp_cd_hdr_t *)&(pMsg->comp_cd_ctrl); + pCompConfig = + (icp_qat_hw_compression_config_t *)(pMsg->cd_pars.sl + .comp_slice_cfg_word); + + ICP_QAT_FW_COMN_NEXT_ID_SET(pCompControlBlock, nextSlice); + ICP_QAT_FW_COMN_CURR_ID_SET(pCompControlBlock, ICP_QAT_FW_SLICE_COMP); + + pCompControlBlock->comp_cfg_offset = 0; + + if ((CPA_DC_STATEFUL == pSessionDesc->sessState) && + (CPA_DC_DEFLATE == pSessionDesc->compType) && + (DC_DECOMPRESSION_REQUEST == compDecomp)) { + /* Enable A, B, C, D, and E (CAMs). 
*/ + pCompControlBlock->ram_bank_flags = + ICP_QAT_FW_COMP_RAM_FLAGS_BUILD( + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */ + ICP_QAT_FW_COMP_BANK_ENABLED, /* Bank E */ + ICP_QAT_FW_COMP_BANK_ENABLED, /* Bank D */ + ICP_QAT_FW_COMP_BANK_ENABLED, /* Bank C */ + ICP_QAT_FW_COMP_BANK_ENABLED, /* Bank B */ + ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */ + bankEnabled = CPA_TRUE; + } else { + /* Disable all banks */ + pCompControlBlock->ram_bank_flags = + ICP_QAT_FW_COMP_RAM_FLAGS_BUILD( + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank E */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank D */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank C */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank B */ + ICP_QAT_FW_COMP_BANK_DISABLED); /* Bank A */ + } + + if (DC_COMPRESSION_REQUEST == compDecomp) { + LAC_MEM_SHARED_WRITE_VIRT_TO_PHYS_PTR_EXTERNAL( + pService->generic_service_info, + pCompControlBlock->comp_state_addr, + pSessionDesc->stateRegistersComp); + } else { + LAC_MEM_SHARED_WRITE_VIRT_TO_PHYS_PTR_EXTERNAL( + pService->generic_service_info, + pCompControlBlock->comp_state_addr, + pSessionDesc->stateRegistersDecomp); + } + + if (CPA_TRUE == bankEnabled) { + pCompControlBlock->ram_banks_addr = contextBufferAddrPhys; + } else { + pCompControlBlock->ram_banks_addr = 0; + } + + pCompControlBlock->resrvd = 0; + + /* Populate Compression Hardware Setup Block */ + dcCompHwBlockPopulate(pSessionDesc, + pCompConfig, + compDecomp, + pService->comp_device_data.enableDmm); +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Populate the translator content descriptor + * + * @description + * This function will populate the translator content descriptor + * + * @param[out] pMsg Pointer to the compression message + * @param[in] nextSlice Next slice + * + *****************************************************************************/ +static void +dcTransContentDescPopulate(icp_qat_fw_comp_req_t *pMsg, + icp_qat_fw_slice_t nextSlice) +{ + + icp_qat_fw_xlt_cd_hdr_t *pTransControlBlock = NULL; + pTransControlBlock = (icp_qat_fw_xlt_cd_hdr_t *)&(pMsg->u2.xlt_cd_ctrl); + + ICP_QAT_FW_COMN_NEXT_ID_SET(pTransControlBlock, nextSlice); + ICP_QAT_FW_COMN_CURR_ID_SET(pTransControlBlock, ICP_QAT_FW_SLICE_XLAT); + + pTransControlBlock->resrvd1 = 0; + pTransControlBlock->resrvd2 = 0; + pTransControlBlock->resrvd3 = 0; +} + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Get the context size and the history size + * + * @description + * This function will get the size of the context buffer and the history + * buffer. The history buffer is a subset of the context buffer and its + * size is needed for stateful compression. 
+ + * @param[in] dcInstance DC Instance Handle + * + * @param[in] pSessionData Pointer to a user instantiated + * structure containing session data + * @param[out] pContextSize Pointer to the context size + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * + * + *****************************************************************************/ +static CpaStatus +dcGetContextSize(CpaInstanceHandle dcInstance, + CpaDcSessionSetupData *pSessionData, + Cpa32U *pContextSize) +{ + sal_compression_service_t *pCompService = NULL; + + pCompService = (sal_compression_service_t *)dcInstance; + + *pContextSize = 0; + if ((CPA_DC_STATEFUL == pSessionData->sessState) && + (CPA_DC_DEFLATE == pSessionData->compType) && + (CPA_DC_DIR_COMPRESS != pSessionData->sessDirection)) { + *pContextSize = + pCompService->comp_device_data.inflateContextSize; + } + return CPA_STATUS_SUCCESS; +} + +CpaStatus +dcInitSession(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaDcSessionSetupData *pSessionData, + CpaBufferList *pContextBuffer, + CpaDcCallbackFn callbackFn) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + sal_compression_service_t *pService = NULL; + icp_qat_fw_comp_req_t *pReqCache = NULL; + dc_session_desc_t *pSessionDesc = NULL; + CpaPhysicalAddr contextAddrPhys = 0; + CpaPhysicalAddr physAddress = 0; + CpaPhysicalAddr physAddressAligned = 0; + Cpa32U minContextSize = 0, historySize = 0; + Cpa32U rpCmdFlags = 0; + icp_qat_fw_serv_specif_flags cmdFlags = 0; + Cpa8U secureRam = ICP_QAT_FW_COMP_ENABLE_SECURE_RAM_USED_AS_INTMD_BUF; + Cpa8U sessType = ICP_QAT_FW_COMP_STATELESS_SESSION; + Cpa8U autoSelectBest = ICP_QAT_FW_COMP_NOT_AUTO_SELECT_BEST; + Cpa8U enhancedAutoSelectBest = ICP_QAT_FW_COMP_NOT_ENH_AUTO_SELECT_BEST; + Cpa8U disableType0EnhancedAutoSelectBest = + ICP_QAT_FW_COMP_NOT_DISABLE_TYPE0_ENH_AUTO_SELECT_BEST; + icp_qat_fw_la_cmd_id_t dcCmdId = + (icp_qat_fw_la_cmd_id_t)ICP_QAT_FW_COMP_CMD_STATIC; + icp_qat_fw_comn_flags cmnRequestFlags = 0; + dc_integrity_crc_fw_t *pDataIntegrityCrcs = NULL; + + cmnRequestFlags = + ICP_QAT_FW_COMN_FLAGS_BUILD(DC_DEFAULT_QAT_PTR_TYPE, + QAT_COMN_CD_FLD_TYPE_16BYTE_DATA); + + pService = (sal_compression_service_t *)dcInstance; + + secureRam = pService->comp_device_data.useDevRam; + + LAC_CHECK_NULL_PARAM(pSessionHandle); + LAC_CHECK_NULL_PARAM(pSessionData); + + /* Check that the parameters defined in the pSessionData are valid for + * the + * device */ + if (CPA_STATUS_SUCCESS != + dcCheckSessionData(pSessionData, dcInstance)) { + return CPA_STATUS_INVALID_PARAM; + } + + if ((CPA_DC_STATEFUL == pSessionData->sessState) && + (CPA_DC_DIR_DECOMPRESS != pSessionData->sessDirection)) { + QAT_UTILS_LOG("Stateful sessions are not supported.\n"); + return CPA_STATUS_UNSUPPORTED; + } + + if (CPA_DC_HT_FULL_DYNAMIC == pSessionData->huffType) { + /* Test if DRAM is available for the intermediate buffers */ + if ((NULL == pService->pInterBuffPtrsArray) && + (0 == pService->pInterBuffPtrsArrayPhyAddr)) { + if (CPA_DC_ASB_STATIC_DYNAMIC == + pSessionData->autoSelectBestHuffmanTree) { + /* Define the Huffman tree as static */ + pSessionData->huffType = CPA_DC_HT_STATIC; + } else { + QAT_UTILS_LOG( + "No buffer defined for this instance - see cpaDcStartInstance.\n"); + return CPA_STATUS_RESOURCE; + } + } + } + + if ((CPA_DC_STATEFUL == pSessionData->sessState) && + (CPA_DC_DEFLATE == pSessionData->compType)) { + /* Get the size of the context buffer */ + status = + dcGetContextSize(dcInstance, pSessionData, &minContextSize); + + if 
(CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG( + "Unable to get the context size of the session.\n"); + return CPA_STATUS_FAIL; + } + + /* If the minContextSize is zero it means we will not save or + * restore + * any history */ + if (0 != minContextSize) { + Cpa64U contextBuffSize = 0; + + LAC_CHECK_NULL_PARAM(pContextBuffer); + + if (LacBuffDesc_BufferListVerify( + pContextBuffer, + &contextBuffSize, + LAC_NO_ALIGNMENT_SHIFT) != CPA_STATUS_SUCCESS) { + return CPA_STATUS_INVALID_PARAM; + } + + /* Ensure that the context buffer size is greater or + * equal + * to minContextSize */ + if (contextBuffSize < minContextSize) { + QAT_UTILS_LOG( + "Context buffer size should be greater or equal to %d.\n", + minContextSize); + return CPA_STATUS_INVALID_PARAM; + } + } + } + + /* Re-align the session structure to 64 byte alignment */ + physAddress = + LAC_OS_VIRT_TO_PHYS_EXTERNAL(pService->generic_service_info, + (Cpa8U *)pSessionHandle + + sizeof(void *)); + + if (physAddress == 0) { + QAT_UTILS_LOG( + "Unable to get the physical address of the session.\n"); + return CPA_STATUS_FAIL; + } + + physAddressAligned = + (CpaPhysicalAddr)LAC_ALIGN_POW2_ROUNDUP(physAddress, + LAC_64BYTE_ALIGNMENT); + + pSessionDesc = (dc_session_desc_t *) + /* Move the session pointer by the physical offset + between aligned and unaligned memory */ + ((Cpa8U *)pSessionHandle + sizeof(void *) + + (physAddressAligned - physAddress)); + + /* Save the aligned pointer in the first bytes (size of LAC_ARCH_UINT) + * of the session memory */ + *((LAC_ARCH_UINT *)pSessionHandle) = (LAC_ARCH_UINT)pSessionDesc; + + /* Zero the compression session */ + LAC_OS_BZERO(pSessionDesc, sizeof(dc_session_desc_t)); + + /* Write the buffer descriptor for context/history */ + if (0 != minContextSize) { + status = LacBuffDesc_BufferListDescWrite( + pContextBuffer, + &contextAddrPhys, + CPA_FALSE, + &(pService->generic_service_info)); + + if (status != CPA_STATUS_SUCCESS) { + return status; + } + + pSessionDesc->pContextBuffer = pContextBuffer; + pSessionDesc->historyBuffSize = historySize; + } + + pSessionDesc->cumulativeConsumedBytes = 0; + + /* Initialise pSessionDesc */ + pSessionDesc->requestType = DC_REQUEST_FIRST; + pSessionDesc->huffType = pSessionData->huffType; + pSessionDesc->compType = pSessionData->compType; + pSessionDesc->checksumType = pSessionData->checksum; + pSessionDesc->autoSelectBestHuffmanTree = + pSessionData->autoSelectBestHuffmanTree; + pSessionDesc->sessDirection = pSessionData->sessDirection; + pSessionDesc->sessState = pSessionData->sessState; + pSessionDesc->compLevel = pSessionData->compLevel; + pSessionDesc->isDcDp = CPA_FALSE; + pSessionDesc->minContextSize = minContextSize; + pSessionDesc->isSopForCompressionProcessed = CPA_FALSE; + pSessionDesc->isSopForDecompressionProcessed = CPA_FALSE; + + if (CPA_DC_ADLER32 == pSessionDesc->checksumType) { + pSessionDesc->previousChecksum = 1; + } else { + pSessionDesc->previousChecksum = 0; + } + + if (CPA_DC_STATEFUL == pSessionData->sessState) { + /* Init the spinlock used to lock the access to the number of + * stateful + * in-flight requests */ + status = LAC_SPINLOCK_INIT(&(pSessionDesc->sessionLock)); + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG( + "Spinlock init failed for sessionLock.\n"); + return CPA_STATUS_RESOURCE; + } + } + + /* For asynchronous - use the user supplied callback + * for synchronous - use the internal synchronous callback */ + pSessionDesc->pCompressionCb = ((void *)NULL != (void *)callbackFn) ? 
+ callbackFn : + LacSync_GenWakeupSyncCaller; + + /* Reset the pending callback counters */ + qatUtilsAtomicSet(0, &pSessionDesc->pendingStatelessCbCount); + qatUtilsAtomicSet(0, &pSessionDesc->pendingStatefulCbCount); + pSessionDesc->pendingDpStatelessCbCount = 0; + + if (CPA_DC_DIR_DECOMPRESS != pSessionData->sessDirection) { + if (CPA_DC_HT_FULL_DYNAMIC == pSessionData->huffType) { + /* Populate the compression section of the content + * descriptor */ + dcCompContentDescPopulate(pService, + pSessionDesc, + contextAddrPhys, + &(pSessionDesc->reqCacheComp), + ICP_QAT_FW_SLICE_XLAT, + DC_COMPRESSION_REQUEST); + + /* Populate the translator section of the content + * descriptor */ + dcTransContentDescPopulate( + &(pSessionDesc->reqCacheComp), + ICP_QAT_FW_SLICE_DRAM_WR); + + if (0 != pService->pInterBuffPtrsArrayPhyAddr) { + pReqCache = &(pSessionDesc->reqCacheComp); + + pReqCache->u1.xlt_pars.inter_buff_ptr = + pService->pInterBuffPtrsArrayPhyAddr; + } + } else { + dcCompContentDescPopulate(pService, + pSessionDesc, + contextAddrPhys, + &(pSessionDesc->reqCacheComp), + ICP_QAT_FW_SLICE_DRAM_WR, + DC_COMPRESSION_REQUEST); + } + } + + /* Populate the compression section of the content descriptor for + * the decompression case or combined */ + if (CPA_DC_DIR_COMPRESS != pSessionData->sessDirection) { + dcCompContentDescPopulate(pService, + pSessionDesc, + contextAddrPhys, + &(pSessionDesc->reqCacheDecomp), + ICP_QAT_FW_SLICE_DRAM_WR, + DC_DECOMPRESSION_REQUEST); + } + + if (CPA_DC_STATEFUL == pSessionData->sessState) { + sessType = ICP_QAT_FW_COMP_STATEFUL_SESSION; + + LAC_OS_BZERO(&pSessionDesc->stateRegistersComp, + sizeof(pSessionDesc->stateRegistersComp)); + + LAC_OS_BZERO(&pSessionDesc->stateRegistersDecomp, + sizeof(pSessionDesc->stateRegistersDecomp)); + } + + /* Get physical address of E2E CRC buffer */ + pSessionDesc->physDataIntegrityCrcs = (icp_qat_addr_width_t) + LAC_OS_VIRT_TO_PHYS_EXTERNAL(pService->generic_service_info, + &pSessionDesc->dataIntegrityCrcs); + if (0 == pSessionDesc->physDataIntegrityCrcs) { + QAT_UTILS_LOG( + "Unable to get the physical address of Data Integrity buffer.\n"); + return CPA_STATUS_FAIL; + } + /* Initialize default CRC parameters */ + pDataIntegrityCrcs = &pSessionDesc->dataIntegrityCrcs; + pDataIntegrityCrcs->crc32 = 0; + pDataIntegrityCrcs->adler32 = 1; + pDataIntegrityCrcs->oCrc32Cpr = DC_INVALID_CRC; + pDataIntegrityCrcs->iCrc32Cpr = DC_INVALID_CRC; + pDataIntegrityCrcs->oCrc32Xlt = DC_INVALID_CRC; + pDataIntegrityCrcs->iCrc32Xlt = DC_INVALID_CRC; + pDataIntegrityCrcs->xorFlags = DC_XOR_FLAGS_DEFAULT; + pDataIntegrityCrcs->crcPoly = DC_CRC_POLY_DEFAULT; + pDataIntegrityCrcs->xorOut = DC_XOR_OUT_DEFAULT; + + /* Initialise seed checksums */ + pSessionDesc->seedSwCrc.swCrcI = 0; + pSessionDesc->seedSwCrc.swCrcO = 0; + + /* Populate the cmdFlags */ + switch (pSessionDesc->autoSelectBestHuffmanTree) { + case CPA_DC_ASB_DISABLED: + break; + case CPA_DC_ASB_STATIC_DYNAMIC: + autoSelectBest = ICP_QAT_FW_COMP_AUTO_SELECT_BEST; + break; + case CPA_DC_ASB_UNCOMP_STATIC_DYNAMIC_WITH_STORED_HDRS: + autoSelectBest = ICP_QAT_FW_COMP_AUTO_SELECT_BEST; + enhancedAutoSelectBest = ICP_QAT_FW_COMP_ENH_AUTO_SELECT_BEST; + break; + case CPA_DC_ASB_UNCOMP_STATIC_DYNAMIC_WITH_NO_HDRS: + autoSelectBest = ICP_QAT_FW_COMP_AUTO_SELECT_BEST; + enhancedAutoSelectBest = ICP_QAT_FW_COMP_ENH_AUTO_SELECT_BEST; + disableType0EnhancedAutoSelectBest = + ICP_QAT_FW_COMP_DISABLE_TYPE0_ENH_AUTO_SELECT_BEST; + break; + default: + break; + } + + rpCmdFlags = 
ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD( + ICP_QAT_FW_COMP_SOP, + ICP_QAT_FW_COMP_EOP, + ICP_QAT_FW_COMP_BFINAL, + ICP_QAT_FW_COMP_NO_CNV, + ICP_QAT_FW_COMP_NO_CNV_RECOVERY, + ICP_QAT_FW_COMP_CRC_MODE_LEGACY); + + cmdFlags = + ICP_QAT_FW_COMP_FLAGS_BUILD(sessType, + autoSelectBest, + enhancedAutoSelectBest, + disableType0EnhancedAutoSelectBest, + secureRam); + + if (CPA_DC_DIR_DECOMPRESS != pSessionData->sessDirection) { + if (CPA_DC_HT_FULL_DYNAMIC == pSessionDesc->huffType) { + dcCmdId = (icp_qat_fw_la_cmd_id_t)( + ICP_QAT_FW_COMP_CMD_DYNAMIC); + } + + pReqCache = &(pSessionDesc->reqCacheComp); + pReqCache->comp_pars.req_par_flags = rpCmdFlags; + pReqCache->comp_pars.crc.legacy.initial_adler = 1; + pReqCache->comp_pars.crc.legacy.initial_crc32 = 0; + + /* Populate header of the common request message */ + SalQatMsg_CmnHdrWrite((icp_qat_fw_comn_req_t *)pReqCache, + ICP_QAT_FW_COMN_REQ_CPM_FW_COMP, + (uint8_t)dcCmdId, + cmnRequestFlags, + cmdFlags); + } + + if (CPA_DC_DIR_COMPRESS != pSessionData->sessDirection) { + dcCmdId = + (icp_qat_fw_la_cmd_id_t)(ICP_QAT_FW_COMP_CMD_DECOMPRESS); + pReqCache = &(pSessionDesc->reqCacheDecomp); + pReqCache->comp_pars.req_par_flags = rpCmdFlags; + pReqCache->comp_pars.crc.legacy.initial_adler = 1; + pReqCache->comp_pars.crc.legacy.initial_crc32 = 0; + + /* Populate header of the common request message */ + SalQatMsg_CmnHdrWrite((icp_qat_fw_comn_req_t *)pReqCache, + ICP_QAT_FW_COMN_REQ_CPM_FW_COMP, + (uint8_t)dcCmdId, + cmnRequestFlags, + cmdFlags); + } + + return status; +} + +CpaStatus +cpaDcInitSession(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaDcSessionSetupData *pSessionData, + CpaBufferList *pContextBuffer, + CpaDcCallbackFn callbackFn) +{ + CpaInstanceHandle insHandle = NULL; + sal_compression_service_t *pService = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + + LAC_CHECK_INSTANCE_HANDLE(insHandle); + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + + pService = (sal_compression_service_t *)insHandle; + + /* Check if SAL is initialised otherwise return an error */ + SAL_RUNNING_CHECK(pService); + + return dcInitSession(insHandle, + pSessionHandle, + pSessionData, + pContextBuffer, + callbackFn); +} + +CpaStatus +cpaDcResetSession(const CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + CpaInstanceHandle insHandle = NULL; + dc_session_desc_t *pSessionDesc = NULL; + Cpa64U numPendingStateless = 0; + Cpa64U numPendingStateful = 0; + icp_comms_trans_handle trans_handle = NULL; + LAC_CHECK_NULL_PARAM(pSessionHandle); + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + LAC_CHECK_NULL_PARAM(pSessionDesc); + + if (CPA_TRUE == pSessionDesc->isDcDp) { + insHandle = dcInstance; + } else { + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + } + LAC_CHECK_NULL_PARAM(insHandle); + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + /* Check if SAL is running otherwise return an error */ + SAL_RUNNING_CHECK(insHandle); + if (CPA_TRUE == pSessionDesc->isDcDp) { + trans_handle = ((sal_compression_service_t *)dcInstance) + ->trans_handle_compression_tx; + if (CPA_TRUE == icp_adf_queueDataToSend(trans_handle)) { + /* Process the remaining messages on the ring */ + SalQatMsg_updateQueueTail(trans_handle); + QAT_UTILS_LOG( + "There are remaining messages on the ring\n"); + 
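		/*
		 * The tail update above pushes any queued requests out to the
		 * device, so the reset cannot proceed yet; the caller is
		 * expected to retry once the outstanding responses have been
		 * processed (the non-data-plane branch below performs the
		 * equivalent pending-count checks).
		 */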
return CPA_STATUS_RETRY; + } + + /* Check if there are stateless pending requests */ + if (0 != pSessionDesc->pendingDpStatelessCbCount) { + QAT_UTILS_LOG( + "There are %llu stateless DP requests pending.\n", + (unsigned long long) + pSessionDesc->pendingDpStatelessCbCount); + return CPA_STATUS_RETRY; + } + } else { + numPendingStateless = + qatUtilsAtomicGet(&(pSessionDesc->pendingStatelessCbCount)); + numPendingStateful = + qatUtilsAtomicGet(&(pSessionDesc->pendingStatefulCbCount)); + /* Check if there are stateless pending requests */ + if (0 != numPendingStateless) { + QAT_UTILS_LOG( + "There are %llu stateless requests pending.\n", + (unsigned long long)numPendingStateless); + return CPA_STATUS_RETRY; + } + /* Check if there are stateful pending requests */ + if (0 != numPendingStateful) { + QAT_UTILS_LOG( + "There are %llu stateful requests pending.\n", + (unsigned long long)numPendingStateful); + return CPA_STATUS_RETRY; + } + + /* Reset pSessionDesc */ + pSessionDesc->requestType = DC_REQUEST_FIRST; + pSessionDesc->cumulativeConsumedBytes = 0; + if (CPA_DC_ADLER32 == pSessionDesc->checksumType) { + pSessionDesc->previousChecksum = 1; + } else { + pSessionDesc->previousChecksum = 0; + } + } + /* Reset the pending callback counters */ + qatUtilsAtomicSet(0, &pSessionDesc->pendingStatelessCbCount); + qatUtilsAtomicSet(0, &pSessionDesc->pendingStatefulCbCount); + pSessionDesc->pendingDpStatelessCbCount = 0; + if (CPA_DC_STATEFUL == pSessionDesc->sessState) { + LAC_OS_BZERO(&pSessionDesc->stateRegistersComp, + sizeof(pSessionDesc->stateRegistersComp)); + LAC_OS_BZERO(&pSessionDesc->stateRegistersDecomp, + sizeof(pSessionDesc->stateRegistersDecomp)); + } + return status; +} + +CpaStatus +cpaDcRemoveSession(const CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + CpaInstanceHandle insHandle = NULL; + dc_session_desc_t *pSessionDesc = NULL; + Cpa64U numPendingStateless = 0; + Cpa64U numPendingStateful = 0; + icp_comms_trans_handle trans_handle = NULL; + + LAC_CHECK_NULL_PARAM(pSessionHandle); + pSessionDesc = DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle); + LAC_CHECK_NULL_PARAM(pSessionDesc); + + if (CPA_TRUE == pSessionDesc->isDcDp) { + insHandle = dcInstance; + } else { + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + } + + LAC_CHECK_NULL_PARAM(insHandle); + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + + /* Check if SAL is running otherwise return an error */ + SAL_RUNNING_CHECK(insHandle); + + if (CPA_TRUE == pSessionDesc->isDcDp) { + trans_handle = ((sal_compression_service_t *)insHandle) + ->trans_handle_compression_tx; + + if (CPA_TRUE == icp_adf_queueDataToSend(trans_handle)) { + /* Process the remaining messages on the ring */ + SalQatMsg_updateQueueTail(trans_handle); + QAT_UTILS_LOG( + "There are remaining messages on the ring.\n"); + return CPA_STATUS_RETRY; + } + + /* Check if there are stateless pending requests */ + if (0 != pSessionDesc->pendingDpStatelessCbCount) { + QAT_UTILS_LOG( + "There are %llu stateless DP requests pending.\n", + (unsigned long long) + pSessionDesc->pendingDpStatelessCbCount); + return CPA_STATUS_RETRY; + } + } else { + numPendingStateless = + qatUtilsAtomicGet(&(pSessionDesc->pendingStatelessCbCount)); + numPendingStateful = + qatUtilsAtomicGet(&(pSessionDesc->pendingStatefulCbCount)); + + /* Check if there are stateless pending requests */ + if (0 != numPendingStateless) { + QAT_UTILS_LOG( 
+ "There are %llu stateless requests pending.\n", + (unsigned long long)numPendingStateless); + status = CPA_STATUS_RETRY; + } + + /* Check if there are stateful pending requests */ + if (0 != numPendingStateful) { + QAT_UTILS_LOG( + "There are %llu stateful requests pending.\n", + (unsigned long long)numPendingStateful); + status = CPA_STATUS_RETRY; + } + if ((CPA_DC_STATEFUL == pSessionDesc->sessState) && + (CPA_STATUS_SUCCESS == status)) { + if (CPA_STATUS_SUCCESS != + LAC_SPINLOCK_DESTROY( + &(pSessionDesc->sessionLock))) { + QAT_UTILS_LOG( + "Failed to destory session lock.\n"); + } + } + } + + return status; +} + +CpaStatus +dcGetSessionSize(CpaInstanceHandle dcInstance, + CpaDcSessionSetupData *pSessionData, + Cpa32U *pSessionSize, + Cpa32U *pContextSize) +{ + + CpaStatus status = CPA_STATUS_SUCCESS; + CpaInstanceHandle insHandle = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + + /* Check parameters */ + LAC_CHECK_NULL_PARAM(insHandle); + LAC_CHECK_NULL_PARAM(pSessionData); + LAC_CHECK_NULL_PARAM(pSessionSize); + + if (dcCheckSessionData(pSessionData, insHandle) != CPA_STATUS_SUCCESS) { + return CPA_STATUS_INVALID_PARAM; + } + + /* Get session size for session data */ + *pSessionSize = sizeof(dc_session_desc_t) + LAC_64BYTE_ALIGNMENT + + sizeof(LAC_ARCH_UINT); + + if (NULL != pContextSize) { + status = + dcGetContextSize(insHandle, pSessionData, pContextSize); + + if (CPA_STATUS_SUCCESS != status) { + QAT_UTILS_LOG( + "Unable to get the context size of the session.\n"); + return CPA_STATUS_FAIL; + } + } + + return CPA_STATUS_SUCCESS; +} + +CpaStatus +cpaDcGetSessionSize(CpaInstanceHandle dcInstance, + CpaDcSessionSetupData *pSessionData, + Cpa32U *pSessionSize, + Cpa32U *pContextSize) +{ + + LAC_CHECK_NULL_PARAM(pContextSize); + + return dcGetSessionSize(dcInstance, + pSessionData, + pSessionSize, + pContextSize); +} diff --git a/sys/dev/qat/qat_api/common/compression/dc_stats.c b/sys/dev/qat/qat_api/common/compression/dc_stats.c new file mode 100644 index 000000000000..bcd3d61cb3c6 --- /dev/null +++ b/sys/dev/qat/qat_api/common/compression/dc_stats.c @@ -0,0 +1,90 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_stats.c + * + * @ingroup Dc_DataCompression + * + * @description + * Implementation of the Data Compression stats operations. 
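/*
 * Illustrative caller-side sketch, not part of the imported sources: the
 * set-up/tear-down sequence for the traditional compression API implemented
 * above (query sizes, allocate the session memory, initialise the session,
 * then retry removal while requests are in flight).  allocContigMem() and
 * freeContigMem() stand in for whatever DMA-able allocator the consumer
 * uses, and the setup values are arbitrary examples.
 */
#include "cpa.h"
#include "cpa_dc.h"

extern void *allocContigMem(Cpa32U sizeBytes);	/* placeholder allocator */
extern void freeContigMem(void *ptr);

static CpaStatus
exampleDcSessionLifecycle(CpaInstanceHandle dcInstance)
{
	CpaDcSessionSetupData sd = { 0 };
	CpaDcSessionHandle sessionHdl;
	Cpa32U sessionSize = 0;
	Cpa32U contextSize = 0;
	CpaStatus status;

	sd.compLevel = CPA_DC_L1;
	sd.compType = CPA_DC_DEFLATE;
	sd.huffType = CPA_DC_HT_STATIC;
	sd.sessDirection = CPA_DC_DIR_COMBINED;
	sd.sessState = CPA_DC_STATELESS;
	sd.checksum = CPA_DC_CRC32;

	/* contextSize stays 0 for stateless sessions (see dcGetContextSize). */
	status = cpaDcGetSessionSize(dcInstance, &sd, &sessionSize,
	    &contextSize);
	if (status != CPA_STATUS_SUCCESS)
		return status;

	sessionHdl = allocContigMem(sessionSize);
	if (sessionHdl == NULL)
		return CPA_STATUS_RESOURCE;

	/* A NULL callback selects the internal synchronous wake-up path. */
	status = cpaDcInitSession(dcInstance, sessionHdl, &sd,
	    NULL /* no context buffer needed when contextSize == 0 */, NULL);
	if (status != CPA_STATUS_SUCCESS) {
		freeContigMem(sessionHdl);
		return status;
	}

	/* ... cpaDcCompressData()/cpaDcDecompressData() requests ... */

	/* Removal returns CPA_STATUS_RETRY while callbacks are still pending. */
	do {
		status = cpaDcRemoveSession(dcInstance, sessionHdl);
	} while (CPA_STATUS_RETRY == status);
	freeContigMem(sessionHdl);
	return status;
}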
+ * + *****************************************************************************/ + +/* + ******************************************************************************* + * Include public/global header files + ******************************************************************************* + */ +#include "cpa.h" +#include "cpa_dc.h" +#include "icp_accel_devices.h" +#include "icp_adf_debug.h" +/* + ******************************************************************************* + * Include private header files + ******************************************************************************* + */ +#include "lac_common.h" +#include "icp_accel_devices.h" +#include "sal_statistics.h" +#include "dc_session.h" +#include "dc_datapath.h" +#include "lac_mem_pools.h" +#include "sal_service_state.h" +#include "sal_types_compression.h" +#include "dc_stats.h" + +CpaStatus +dcStatsInit(sal_compression_service_t *pService) +{ + CpaStatus status = CPA_STATUS_SUCCESS; + + pService->pCompStatsArr = + LAC_OS_MALLOC(COMPRESSION_NUM_STATS * sizeof(QatUtilsAtomic)); + + if (pService->pCompStatsArr == NULL) { + status = CPA_STATUS_RESOURCE; + } + + if (CPA_STATUS_SUCCESS == status) { + COMPRESSION_STATS_RESET(pService); + } + + return status; +} + +void +dcStatsFree(sal_compression_service_t *pService) +{ + if (NULL != pService->pCompStatsArr) { + LAC_OS_FREE(pService->pCompStatsArr); + } +} + +CpaStatus +cpaDcGetStats(CpaInstanceHandle dcInstance, CpaDcStats *pStatistics) +{ + sal_compression_service_t *pService = NULL; + CpaInstanceHandle insHandle = NULL; + + if (CPA_INSTANCE_HANDLE_SINGLE == dcInstance) { + insHandle = dcGetFirstHandle(); + } else { + insHandle = dcInstance; + } + + pService = (sal_compression_service_t *)insHandle; + + LAC_CHECK_NULL_PARAM(insHandle); + LAC_CHECK_NULL_PARAM(pStatistics); + SAL_RUNNING_CHECK(insHandle); + + SAL_CHECK_INSTANCE_TYPE(insHandle, SAL_SERVICE_TYPE_COMPRESSION); + + /* Retrieves the statistics for compression */ + COMPRESSION_STATS_GET(pStatistics, pService); + + return CPA_STATUS_SUCCESS; +} diff --git a/sys/dev/qat/qat_api/common/compression/icp_sal_dc_err.c b/sys/dev/qat/qat_api/common/compression/icp_sal_dc_err.c new file mode 100644 index 000000000000..6b3735b4051b --- /dev/null +++ b/sys/dev/qat/qat_api/common/compression/icp_sal_dc_err.c @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file icp_sal_dc_err.c + * + * @defgroup SalCommon + * + * @ingroup SalCommon + * + *****************************************************************************/ + +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ +#include "cpa.h" +#include "icp_sal.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ +#include "dc_error_counter.h" + +Cpa64U +icp_sal_get_dc_error(Cpa8S dcError) +{ + return getDcErrorCounter(dcError); +} diff --git a/sys/dev/qat/qat_api/common/compression/include/dc_datapath.h b/sys/dev/qat/qat_api/common/compression/include/dc_datapath.h new file mode 100644 index 000000000000..0a6ef7191704 --- /dev/null +++ b/sys/dev/qat/qat_api/common/compression/include/dc_datapath.h @@ -0,0 +1,186 @@ 
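/*
 * Illustrative sketch, not part of the imported sources: reading the
 * per-instance statistics and one of the per-error-code counters exposed by
 * cpaDcGetStats() and icp_sal_get_dc_error() above.  The CpaDcStats member
 * names follow the definition in cpa_dc.h, CPA_DC_OVERFLOW is one
 * CpaDcReqStatus value chosen purely as an example, and printf() stands in
 * for whatever logging the caller uses.
 */
#include "cpa.h"
#include "cpa_dc.h"
#include "icp_sal.h"

static void
exampleDcDumpStats(CpaInstanceHandle dcInstance)
{
	CpaDcStats stats = { 0 };

	if (CPA_STATUS_SUCCESS == cpaDcGetStats(dcInstance, &stats)) {
		printf("comp requests %llu, completed %llu\n",
		    (unsigned long long)stats.numCompRequests,
		    (unsigned long long)stats.numCompCompleted);
		printf("decomp requests %llu, completed %llu\n",
		    (unsigned long long)stats.numDecompRequests,
		    (unsigned long long)stats.numDecompCompleted);
	}

	/* Cumulative count of responses that reported a destination overflow. */
	printf("overflow responses %llu\n",
	    (unsigned long long)icp_sal_get_dc_error(CPA_DC_OVERFLOW));
}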
+/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_datapath.h + * + * @ingroup Dc_DataCompression + * + * @description + * Definition of the Data Compression datapath parameters. + * + ******************* + * **********************************************************/ +#ifndef DC_DATAPATH_H_ +#define DC_DATAPATH_H_ + +#define LAC_QAT_DC_REQ_SZ_LW 32 +#define LAC_QAT_DC_RESP_SZ_LW 8 + +/* Restriction on the source buffer size for compression due to the firmware + * processing */ +#define DC_SRC_BUFFER_MIN_SIZE (15) + +/* Restriction on the destination buffer size for compression due to + * the management of skid buffers in the firmware */ +#define DC_DEST_BUFFER_DYN_MIN_SIZE (128) +#define DC_DEST_BUFFER_STA_MIN_SIZE (64) +/* C62x and C3xxx pcie rev0 devices require an additional 32bytes */ +#define DC_DEST_BUFFER_STA_ADDITIONAL_SIZE (32) + +/* C4xxx device only requires 47 bytes */ +#define DC_DEST_BUFFER_MIN_SIZE (47) + +/* Minimum destination buffer size for decompression */ +#define DC_DEST_BUFFER_DEC_MIN_SIZE (1) + +/* Restriction on the source and destination buffer sizes for compression due + * to the firmware taking 32 bits parameters. The max size is 2^32-1 */ +#define DC_BUFFER_MAX_SIZE (0xFFFFFFFF) + +/* DC Source & Destination buffer type (FLAT/SGL) */ +#define DC_DEFAULT_QAT_PTR_TYPE QAT_COMN_PTR_TYPE_SGL +#define DC_DP_QAT_PTR_TYPE QAT_COMN_PTR_TYPE_FLAT + +/* Offset to first byte of Input Byte Counter (IBC) in state register */ +#define DC_STATE_IBC_OFFSET (8) +/* Size in bytes of input byte counter (IBC) in state register */ +#define DC_IBC_SIZE_IN_BYTES (4) + +/* Offset to first byte to CRC32 in state register */ +#define DC_STATE_CRC32_OFFSET (40) +/* Offset to first byte to output CRC32 in state register */ +#define DC_STATE_OUTPUT_CRC32_OFFSET (48) +/* Offset to first byte to input CRC32 in state register */ +#define DC_STATE_INPUT_CRC32_OFFSET (52) + +/* Offset to first byte of ADLER32 in state register */ +#define DC_STATE_ADLER32_OFFSET (48) + +/* 8 bit mask value */ +#define DC_8_BIT_MASK (0xff) + +/* 8 bit shift position */ +#define DC_8_BIT_SHIFT_POS (8) + +/* Size in bytes of checksum */ +#define DC_CHECKSUM_SIZE_IN_BYTES (4) + +/* Mask used to set the most significant bit to zero */ +#define DC_STATE_REGISTER_ZERO_MSB_MASK (0x7F) + +/* Mask used to keep only the most significant bit and set the others to zero */ +#define DC_STATE_REGISTER_KEEP_MSB_MASK (0x80) + +/* Compression state register word containing the parity bit */ +#define DC_STATE_REGISTER_PARITY_BIT_WORD (5) + +/* Location of the parity bit within the compression state register word */ +#define DC_STATE_REGISTER_PARITY_BIT (7) + +/* size which needs to be reserved before the results field to + * align the results field with the API struct */ +#define DC_API_ALIGNMENT_OFFSET (offsetof(CpaDcDpOpData, results)) + +/* Mask used to check the CompressAndVerify capability bit */ +#define DC_CNV_EXTENDED_CAPABILITY (0x01) + +/* Mask used to check the CompressAndVerifyAndRecover capability bit */ +#define DC_CNVNR_EXTENDED_CAPABILITY (0x100) + +/* Default values for CNV integrity checks, + * those are used to inform hardware of specifying CRC parameters to be used + * when calculating CRCs */ +#define DC_CRC_POLY_DEFAULT 0x04c11db7 +#define DC_XOR_FLAGS_DEFAULT 0xe0000 +#define DC_XOR_OUT_DEFAULT 0xffffffff +#define DC_INVALID_CRC 0x0 + +/** 
+******************************************************************************* +* @ingroup cpaDc Data Compression +* Compression cookie +* @description +* This cookie stores information for a particular compression perform op. +* This includes various user-supplied parameters for the operation which +* will be needed in our callback function. +* A pointer to this cookie is stored in the opaque data field of the QAT +* message so that it can be accessed in the asynchronous callback. +* @note +* The order of the parameters within this structure is important. It needs +* to match the order of the parameters in CpaDcDpOpData up to the +* pSessionHandle. This allows the correct processing of the callback. +*****************************************************************************/ +typedef struct dc_compression_cookie_s { + Cpa8U dcReqParamsBuffer[DC_API_ALIGNMENT_OFFSET]; + /**< Memory block - was previously reserved for request parameters. + * Now size maintained so following members align with API struct, + * but no longer used for request parameters */ + CpaDcRqResults reserved; + /**< This is reserved for results to correctly align the structure + * to match the one from the data plane API */ + CpaInstanceHandle dcInstance; + /**< Compression instance handle */ + CpaDcSessionHandle pSessionHandle; + /**< Pointer to the session handle */ + icp_qat_fw_comp_req_t request; + /**< Compression request */ + void *callbackTag; + /**< Opaque data supplied by the client */ + dc_session_desc_t *pSessionDesc; + /**< Pointer to the session descriptor */ + CpaDcFlush flushFlag; + /**< Flush flag */ + CpaDcOpData *pDcOpData; + /**< struct containing flags and CRC related data for this session */ + CpaDcRqResults *pResults; + /**< Pointer to result buffer holding consumed and produced data */ + Cpa32U srcTotalDataLenInBytes; + /**< Total length of the source data */ + Cpa32U dstTotalDataLenInBytes; + /**< Total length of the destination data */ + dc_request_dir_t compDecomp; + /**< Used to know whether the request is compression or decompression. + * Useful when defining the session as combined */ + CpaBufferList *pUserSrcBuff; + /**< virtual userspace ptr to source SGL */ + CpaBufferList *pUserDestBuff; + /**< virtual userspace ptr to destination SGL */ +} dc_compression_cookie_t; + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Callback function called for compression and decompression requests in + * asynchronous mode + * + * @description + * Called to process compression and decompression response messages. This + * callback will check for errors, update the statistics and will call the + * user callback + * + * @param[in] pRespMsg Response message + * + *****************************************************************************/ +void dcCompression_ProcessCallback(void *pRespMsg); + +/** +***************************************************************************** +* @ingroup Dc_DataCompression +* Describes CNV and CNVNR modes +* +* @description +* This enum is used to indicate the CNV modes. 
+* +*****************************************************************************/ +typedef enum dc_cnv_mode_s { + DC_NO_CNV = 0, + /* CNV = FALSE, CNVNR = FALSE */ + DC_CNV, + /* CNV = TRUE, CNVNR = FALSE */ + DC_CNVNR, + /* CNV = TRUE, CNVNR = TRUE */ +} dc_cnv_mode_t; + +#endif /* DC_DATAPATH_H_ */ diff --git a/sys/dev/qat/qat_api/common/compression/include/dc_error_counter.h b/sys/dev/qat/qat_api/common/compression/include/dc_error_counter.h new file mode 100644 index 000000000000..dd1189fd970a --- /dev/null +++ b/sys/dev/qat/qat_api/common/compression/include/dc_error_counter.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_error_counter.h + * + * @ingroup Dc_DataCompression + * + * @description + * Definition of the Data Compression Error Counter parameters. + * + *****************************************************************************/ +#ifndef DC_ERROR_COUNTER_H +#define DC_ERROR_COUNTER_H + +#include "cpa_types.h" +#include "cpa_dc.h" + +#define MAX_DC_ERROR_TYPE 20 + +void dcErrorLog(CpaDcReqStatus dcError); +Cpa64U getDcErrorCounter(CpaDcReqStatus dcError); + +#endif /* DC_ERROR_COUNTER_H */ diff --git a/sys/dev/qat/qat_api/common/compression/include/dc_header_footer.h b/sys/dev/qat/qat_api/common/compression/include/dc_header_footer.h new file mode 100644 index 000000000000..0ec2cc6f3f16 --- /dev/null +++ b/sys/dev/qat/qat_api/common/compression/include/dc_header_footer.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_header_footer.h + * + * @ingroup Dc_DataCompression + * + * @description + * Definition of the Data Compression header and footer parameters. + * + *****************************************************************************/ +#ifndef DC_HEADER_FOOTER_H_ +#define DC_HEADER_FOOTER_H_ + +/* Header and footer sizes for Zlib and Gzip */ +#define DC_ZLIB_HEADER_SIZE (2) +#define DC_GZIP_HEADER_SIZE (10) +#define DC_ZLIB_FOOTER_SIZE (4) +#define DC_GZIP_FOOTER_SIZE (8) + +/* Values used to build the headers for Zlib and Gzip */ +#define DC_GZIP_ID1 (0x1f) +#define DC_GZIP_ID2 (0x8b) +#define DC_GZIP_FILESYSTYPE (0x03) +#define DC_ZLIB_WINDOWSIZE_OFFSET (4) +#define DC_ZLIB_FLEVEL_OFFSET (6) +#define DC_ZLIB_HEADER_OFFSET (31) + +/* Compression level for Zlib */ +#define DC_ZLIB_LEVEL_0 (0) +#define DC_ZLIB_LEVEL_1 (1) +#define DC_ZLIB_LEVEL_2 (2) +#define DC_ZLIB_LEVEL_3 (3) + +/* CM parameter for Zlib */ +#define DC_ZLIB_CM_DEFLATE (8) + +/* Type of Gzip compression */ +#define DC_GZIP_FAST_COMP (4) +#define DC_GZIP_MAX_COMP (2) + +#endif /* DC_HEADER_FOOTER_H_ */ diff --git a/sys/dev/qat/qat_api/common/compression/include/dc_session.h b/sys/dev/qat/qat_api/common/compression/include/dc_session.h new file mode 100644 index 000000000000..5a4961fadd60 --- /dev/null +++ b/sys/dev/qat/qat_api/common/compression/include/dc_session.h @@ -0,0 +1,278 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_session.h + * + * @ingroup Dc_DataCompression + * + * @description + * Definition of the Data Compression session parameters. 
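/*
 * Illustrative sketch, not part of the imported sources: how the
 * dc_header_footer.h constants above map onto the 10-byte gzip header laid
 * out in RFC 1952.  The mapping from the session compression level to the
 * XFL hint byte is a guess made only for illustration; the driver's own
 * header-generation code is not shown in this excerpt.
 */
#include "cpa_dc.h"

static void
exampleBuildGzipHeader(Cpa8U hdr[DC_GZIP_HEADER_SIZE], CpaDcCompLvl level)
{
	hdr[0] = DC_GZIP_ID1;		/* ID1: gzip magic, 0x1f */
	hdr[1] = DC_GZIP_ID2;		/* ID2: gzip magic, 0x8b */
	hdr[2] = DC_ZLIB_CM_DEFLATE;	/* CM: deflate */
	hdr[3] = 0;			/* FLG: no optional fields */
	hdr[4] = hdr[5] = hdr[6] = hdr[7] = 0;	/* MTIME unknown */
	/* XFL: 2 = maximum compression, 4 = fastest algorithm */
	hdr[8] = (CPA_DC_L1 == level) ? DC_GZIP_FAST_COMP : DC_GZIP_MAX_COMP;
	hdr[9] = DC_GZIP_FILESYSTYPE;	/* OS: Unix */
}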
+ * + *****************************************************************************/ +#ifndef DC_SESSION_H +#define DC_SESSION_H + +#include "cpa_dc_dp.h" +#include "icp_qat_fw_comp.h" +#include "sal_qat_cmn_msg.h" + +/* Maximum number of intermediate buffers SGLs for devices + * with a maximum of 6 compression slices */ +#define DC_QAT_MAX_NUM_INTER_BUFFERS_6COMP_SLICES (12) + +/* Maximum number of intermediate buffers SGLs for devices + * with a maximum of 10 max compression slices */ +#define DC_QAT_MAX_NUM_INTER_BUFFERS_10COMP_SLICES (20) + +/* Maximum number of intermediate buffers SGLs for devices + * with a maximum of 24 max compression slices and 32 MEs */ +#define DC_QAT_MAX_NUM_INTER_BUFFERS_24COMP_SLICES (64) + +/* Maximum size of the state registers 64 bytes */ +#define DC_QAT_STATE_REGISTERS_MAX_SIZE (64) + +/* Size of the history window. + * Base 2 logarithm of maximum window size minus 8 */ +#define DC_8K_WINDOW_SIZE (5) +#define DC_16K_WINDOW_SIZE (6) +#define DC_32K_WINDOW_SIZE (7) + +/* Context size */ +#define DC_DEFLATE_MAX_CONTEXT_SIZE (49152) +#define DC_INFLATE_CONTEXT_SIZE (36864) + +#define DC_DEFLATE_EH_MAX_CONTEXT_SIZE (65536) +#define DC_DEFLATE_EH_MIN_CONTEXT_SIZE (49152) +#define DC_INFLATE_EH_CONTEXT_SIZE (34032) + +/* Retrieve the session descriptor pointer from the session context structure + * that the user allocates. The pointer to the internally realigned address + * is stored at the start of the session context that the user allocates */ +#define DC_SESSION_DESC_FROM_CTX_GET(pSession) \ + (dc_session_desc_t *)(*(LAC_ARCH_UINT *)pSession) + +/* Maximum size for the compression part of the content descriptor */ +#define DC_QAT_COMP_CONTENT_DESC_SIZE sizeof(icp_qat_fw_comp_cd_hdr_t) + +/* Maximum size for the translator part of the content descriptor */ +#define DC_QAT_TRANS_CONTENT_DESC_SIZE \ + (sizeof(icp_qat_fw_xlt_cd_hdr_t) + DC_QAT_MAX_TRANS_SETUP_BLK_SZ) + +/* Maximum size of the decompression content descriptor */ +#define DC_QAT_CONTENT_DESC_DECOMP_MAX_SIZE \ + LAC_ALIGN_POW2_ROUNDUP(DC_QAT_COMP_CONTENT_DESC_SIZE, \ + (1 << LAC_64BYTE_ALIGNMENT_SHIFT)) + +/* Maximum size of the compression content descriptor */ +#define DC_QAT_CONTENT_DESC_COMP_MAX_SIZE \ + LAC_ALIGN_POW2_ROUNDUP(DC_QAT_COMP_CONTENT_DESC_SIZE + \ + DC_QAT_TRANS_CONTENT_DESC_SIZE, \ + (1 << LAC_64BYTE_ALIGNMENT_SHIFT)) + +/* Direction of the request */ +typedef enum dc_request_dir_e { + DC_COMPRESSION_REQUEST = 1, + DC_DECOMPRESSION_REQUEST +} dc_request_dir_t; + +/* Type of the compression request */ +typedef enum dc_request_type_e { + DC_REQUEST_FIRST = 1, + DC_REQUEST_SUBSEQUENT +} dc_request_type_t; + +typedef enum dc_block_type_e { + DC_CLEARTEXT_TYPE = 0, + DC_STATIC_TYPE, + DC_DYNAMIC_TYPE +} dc_block_type_t; + +/* Internal data structure supporting end to end data integrity checks. 
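/*
 * Illustrative sketch, not part of the imported sources: the session memory
 * layout that DC_SESSION_DESC_FROM_CTX_GET() above relies on.
 * dcInitSession() places the descriptor on the first 64-byte boundary after
 * a leading pointer-sized slot (the offset is computed from the physical
 * address so the block is aligned as seen by the QAT engine) and stores the
 * descriptor address in that first slot:
 *
 *   pSessionHandle
 *   +---------------+---------+-------------------------------------+
 *   | LAC_ARCH_UINT | padding | dc_session_desc_t (64-byte aligned) |
 *   +---------------+---------+-------------------------------------+
 *           |                   ^
 *           +-------------------+
 *
 * Every later call therefore recovers the descriptor with a single load:
 */
static inline dc_session_desc_t *
exampleGetSessionDesc(CpaDcSessionHandle pSessionHandle)
{
	return DC_SESSION_DESC_FROM_CTX_GET(pSessionHandle);
}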
*/ +typedef struct dc_integrity_crc_fw_s { + Cpa32U crc32; + /* CRC32 checksum returned for compressed data */ + Cpa32U adler32; + /* ADLER32 checksum returned for compressed data */ + Cpa32U oCrc32Cpr; + /* CRC32 checksum returned for data output by compression accelerator */ + Cpa32U iCrc32Cpr; + /* CRC32 checksum returned for input data to compression accelerator */ + Cpa32U oCrc32Xlt; + /* CRC32 checksum returned for data output by translator accelerator */ + Cpa32U iCrc32Xlt; + /* CRC32 checksum returned for input data to translator accelerator */ + Cpa32U xorFlags; + /* Initialise transactor pCRC controls in state register */ + Cpa32U crcPoly; + /* CRC32 polynomial used by hardware */ + Cpa32U xorOut; + /* CRC32 from XOR stage (Input CRC is xor'ed with value in the state) */ + Cpa32U deflateBlockType; + /* Bit 1 - Bit 0 + * 0 0 -> RAW DATA + Deflate header. + * This will not produced any CRC check because + * the output will not come from the slices. + * It will be a simple copy from input to output + * buffers list. + * 0 1 -> Static deflate block type + * 1 0 -> Dynamic deflate block type + * 1 1 -> Invalid type */ +} dc_integrity_crc_fw_t; + +typedef struct dc_sw_checksums_s { + Cpa32U swCrcI; + Cpa32U swCrcO; +} dc_sw_checksums_t; + +/* Session descriptor structure for compression */ +typedef struct dc_session_desc_s { + Cpa8U stateRegistersComp[DC_QAT_STATE_REGISTERS_MAX_SIZE]; + /**< State registers for compression */ + Cpa8U stateRegistersDecomp[DC_QAT_STATE_REGISTERS_MAX_SIZE]; + /**< State registers for decompression */ + icp_qat_fw_comp_req_t reqCacheComp; + /**< Cache as much as possible of the compression request in a pre-built + * request */ + icp_qat_fw_comp_req_t reqCacheDecomp; + /**< Cache as much as possible of the decompression request in a + * pre-built + * request */ + dc_request_type_t requestType; + /**< Type of the compression request. As stateful mode do not support + * more + * than one in-flight request there is no need to use spinlocks */ + dc_request_type_t previousRequestType; + /**< Type of the previous compression request. 
Used in cases where there + * the + * stateful operation needs to be resubmitted */ + CpaDcHuffType huffType; + /**< Huffman tree type */ + CpaDcCompType compType; + /**< Compression type */ + CpaDcChecksum checksumType; + /**< Type of checksum */ + CpaDcAutoSelectBest autoSelectBestHuffmanTree; + /**< Indicates if the implementation selects the best Huffman encoding + */ + CpaDcSessionDir sessDirection; + /**< Session direction */ + CpaDcSessionState sessState; + /**< Session state */ + Cpa32U deflateWindowSize; + /**< Window size */ + CpaDcCompLvl compLevel; + /**< Compression level */ + CpaDcCallbackFn pCompressionCb; + /**< Callback function defined for the traditional compression session + */ + QatUtilsAtomic pendingStatelessCbCount; + /**< Keeps track of number of pending requests on stateless session */ + QatUtilsAtomic pendingStatefulCbCount; + /**< Keeps track of number of pending requests on stateful session */ + Cpa64U pendingDpStatelessCbCount; + /**< Keeps track of number of data plane pending requests on stateless + * session */ + struct mtx sessionLock; + /**< Lock used to provide exclusive access for number of stateful + * in-flight + * requests update */ + CpaBoolean isDcDp; + /**< Indicates if the data plane API is used */ + Cpa32U minContextSize; + /**< Indicates the minimum size required to allocate the context buffer + */ + CpaBufferList *pContextBuffer; + /**< Context buffer */ + Cpa32U historyBuffSize; + /**< Size of the history buffer */ + Cpa64U cumulativeConsumedBytes; + /**< Cumulative amount of consumed bytes. Used to build the footer in + * the + * stateful case */ + Cpa32U previousChecksum; + /**< Save the previous value of the checksum. Used to process zero byte + * stateful compression or decompression requests */ + CpaBoolean isSopForCompressionProcessed; + /**< Indicates whether a Compression Request is received in this session + */ + CpaBoolean isSopForDecompressionProcessed; + /**< Indicates whether a Decompression Request is received in this + * session + */ + /**< Data integrity table */ + dc_integrity_crc_fw_t dataIntegrityCrcs; + /**< Physical address of Data integrity buffer */ + CpaPhysicalAddr physDataIntegrityCrcs; + /* Seed checksums structure used to calculate software calculated + * checksums. 
+ */ + dc_sw_checksums_t seedSwCrc; + /* Driver calculated integrity software CRC */ + dc_sw_checksums_t integritySwCrc; +} dc_session_desc_t; + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Initialise a compression session + * + * @description + * This function will initialise a compression session + * + * @param[in] dcInstance Instance handle derived from discovery + * functions + * @param[in,out] pSessionHandle Pointer to a session handle + * @param[in,out] pSessionData Pointer to a user instantiated structure + * containing session data + * @param[in] pContextBuffer Pointer to context buffer + * + * @param[in] callbackFn For synchronous operation this callback + * shall be a null pointer + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_FAIL Function failed + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + * @retval CPA_STATUS_RESOURCE Error related to system resources + *****************************************************************************/ +CpaStatus dcInitSession(CpaInstanceHandle dcInstance, + CpaDcSessionHandle pSessionHandle, + CpaDcSessionSetupData *pSessionData, + CpaBufferList *pContextBuffer, + CpaDcCallbackFn callbackFn); + +/** + ***************************************************************************** + * @ingroup Dc_DataCompression + * Get the size of the memory required to hold the session information + * + * @description + * This function will get the size of the memory required to hold the + * session information + * + * @param[in] dcInstance Instance handle derived from discovery + * functions + * @param[in] pSessionData Pointer to a user instantiated structure + * containing session data + * @param[out] pSessionSize On return, this parameter will be the size + * of the memory that will be + * required by cpaDcInitSession() for session + * data. + * @param[out] pContextSize On return, this parameter will be the size + * of the memory that will be required + * for context data. Context data is + * save/restore data including history and + * any implementation specific data that is + * required for a save/restore operation. + * + * @retval CPA_STATUS_SUCCESS Function executed successfully + * @retval CPA_STATUS_FAIL Function failed + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in + *****************************************************************************/ +CpaStatus dcGetSessionSize(CpaInstanceHandle dcInstance, + CpaDcSessionSetupData *pSessionData, + Cpa32U *pSessionSize, + Cpa32U *pContextSize); + +#endif /* DC_SESSION_H */ diff --git a/sys/dev/qat/qat_api/common/compression/include/dc_stats.h b/sys/dev/qat/qat_api/common/compression/include/dc_stats.h new file mode 100644 index 000000000000..357be30107b1 --- /dev/null +++ b/sys/dev/qat/qat_api/common/compression/include/dc_stats.h @@ -0,0 +1,81 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ +/** + ***************************************************************************** + * @file dc_stats.h + * + * @ingroup Dc_DataCompression + * + * @description + * Definition of the Data Compression stats parameters. 
+ * + *****************************************************************************/ +#ifndef DC_STATS_H_ +#define DC_STATS_H_ + +/* Number of Compression statistics */ +#define COMPRESSION_NUM_STATS (sizeof(CpaDcStats) / sizeof(Cpa64U)) + +#define COMPRESSION_STAT_INC(statistic, pService) \ + do { \ + if (CPA_TRUE == \ + pService->generic_service_info.stats->bDcStatsEnabled) { \ + qatUtilsAtomicInc( \ + &pService->pCompStatsArr[offsetof(CpaDcStats, \ + statistic) / \ + sizeof(Cpa64U)]); \ + } \ + } while (0) + +/* Macro to get all Compression stats (from internal array of atomics) */ +#define COMPRESSION_STATS_GET(compStats, pService) \ + do { \ + int i; \ + for (i = 0; i < COMPRESSION_NUM_STATS; i++) { \ + ((Cpa64U *)compStats)[i] = \ + qatUtilsAtomicGet(&pService->pCompStatsArr[i]); \ + } \ + } while (0) + +/* Macro to reset all Compression stats */ +#define COMPRESSION_STATS_RESET(pService) \ + do { \ + int i; \ + for (i = 0; i < COMPRESSION_NUM_STATS; i++) { \ + qatUtilsAtomicSet(0, &pService->pCompStatsArr[i]); \ + } \ + } while (0) + +/** +******************************************************************************* +* @ingroup Dc_DataCompression +* Initialises the compression stats +* +* @description +* This function allocates and initialises the stats array to 0 +* +* @param[in] pService Pointer to a compression service structure +* +* @retval CPA_STATUS_SUCCESS initialisation successful +* @retval CPA_STATUS_RESOURCE array allocation failed +* +*****************************************************************************/ +CpaStatus dcStatsInit(sal_compression_service_t *pService); + +/** +******************************************************************************* +* @ingroup Dc_DataCompression +* Frees the compression stats +* +* @description +* This function frees the stats array +* +* @param[in] pService Pointer to a compression service structure +* +* @retval None +* +*****************************************************************************/ +void dcStatsFree(sal_compression_service_t *pService); + +#endif /* DC_STATS_H_ */ diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_session.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_session.h new file mode 100644 index 000000000000..6ae3c51e7766 --- /dev/null +++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_session.h @@ -0,0 +1,622 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + ***************************************************************************** + * @file lac_session.h + * + * @defgroup LacSym_Session Session + * + * @ingroup LacSym + * + * Definition of symmetric session descriptor structure + * + * @lld_start + * + * @lld_overview + * A session is required for each symmetric operation. The session descriptor + * holds information about the session from when the session is initialised to + * when the session is removed. The session descriptor is used in the + * subsequent perform operations in the paths for both sending the request and + * receiving the response. The session descriptor and any other state + * information required for processing responses from the QAT are stored in an + * internal cookie. A pointer to this cookie is stored in the opaque data + * field of the QAT request. + * + * The user allocates the memory for the session using the size returned from + * \ref cpaCySymSessionCtxGetSize(). Internally this memory is re-aligned on a + * 64 byte boundary for use by the QAT engine. 
The aligned pointer is saved in + * the first bytes (size of void *) of the session memory. This address + * is then dereferenced in subsequent performs to get access to the session + * descriptor. + * + * <b>LAC Session Init</b>\n The session descriptor is re-aligned and + * populated. This includes populating the content descriptor which contains + * the hardware setup for the QAT engine. The content descriptor is a read + * only structure after session init and a pointer to it is sent to the QAT + * for each perform operation. + * + * <b>LAC Perform </b>\n + * The address for the session descriptor is got by dereferencing the first + * bytes of the session memory (size of void *). For each successful + * request put on the ring, the pendingCbCount for the session is incremented. + * + * <b>LAC Callback </b>\n + * For each successful response the pendingCbCount for the session is + * decremented. See \ref LacSymCb_ProcessCallbackInternal() + * + * <b>LAC Session Remove </b>\n + * The address for the session descriptor is got by dereferencing the first + * bytes of the session memory (size of void *). + * The pendingCbCount for the session is checked to see if it is 0. If it is + * non 0 then there are requests in flight. An error is returned to the user. + * + * <b>Concurrency</b>\n + * A reference count is used to prevent the descriptor being removed + * while there are requests in flight. + * + * <b>Reference Count</b>\n + * - The perform funcion increments the reference count for the session. + * - The callback function decrements the reference count for the session. + * - The Remove function checks the reference count to ensure that it is 0. + * + * @lld_dependencies + * - \ref LacMem "Memory" - Inline memory functions + * - QatUtils: logging, locking & virt to phys translations. + * + * @lld_initialisation + * + * @lld_module_algorithms + * + * @lld_process_context + * + * @lld_end + * + *****************************************************************************/ + +/***************************************************************************/ + +#ifndef LAC_SYM_SESSION_H +#define LAC_SYM_SESSION_H + +/* + * Common alignment attributes to ensure + * hashStatePrefixBuffer is 64-byte aligned + */ +#define ALIGN_START(x) +#define ALIGN_END(x) __attribute__((__aligned__(x))) +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ + +#include "cpa.h" +#include "icp_accel_devices.h" +#include "lac_list.h" +#include "lac_sal_types.h" +#include "sal_qat_cmn_msg.h" +#include "lac_sym_cipher_defs.h" +#include "lac_sym.h" +#include "lac_sym_hash_defs.h" +#include "lac_sym_qat_hash.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ +/** +******************************************************************************* +* @ingroup LacSym_Session +* Symmetric session descriptor +* @description +* This structure stores information about a session +* Note: struct types lac_session_d1_s and lac_session_d2_s are subsets of +* this structure. Elements in all three should retain the same order +* Only this structure is used in the session init call. The other two are +* for determining the size of memory to allocate. 
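/*
 * Illustrative caller-side sketch, not part of the imported sources: the
 * init/perform/remove flow narrated in the overview above, using the public
 * symmetric crypto API.  The key pointer and the allocator are placeholders,
 * and error handling is reduced to the retry case that the reference count
 * guards against.
 */
#include "cpa.h"
#include "cpa_cy_sym.h"

extern void *allocContigMem(Cpa32U sizeBytes);	/* placeholder allocator */
extern void freeContigMem(void *ptr);

static CpaStatus
exampleCySessionLifecycle(CpaInstanceHandle cyInstance, Cpa8U *key)
{
	CpaCySymSessionSetupData sd = { 0 };
	CpaCySymSessionCtx sessionCtx;
	Cpa32U ctxSize = 0;
	CpaStatus status;

	sd.sessionPriority = CPA_CY_PRIORITY_NORMAL;
	sd.symOperation = CPA_CY_SYM_OP_CIPHER;
	sd.cipherSetupData.cipherAlgorithm = CPA_CY_SYM_CIPHER_AES_CBC;
	sd.cipherSetupData.cipherDirection =
	    CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT;
	sd.cipherSetupData.cipherKeyLenInBytes = 16;
	sd.cipherSetupData.pCipherKey = key;

	/* The size returned includes the realignment overhead described above. */
	status = cpaCySymSessionCtxGetSize(cyInstance, &sd, &ctxSize);
	if (status != CPA_STATUS_SUCCESS)
		return status;

	sessionCtx = allocContigMem(ctxSize);
	if (sessionCtx == NULL)
		return CPA_STATUS_RESOURCE;

	status = cpaCySymInitSession(cyInstance, NULL /* synchronous */,
	    &sd, sessionCtx);
	if (status != CPA_STATUS_SUCCESS) {
		freeContigMem(sessionCtx);
		return status;
	}

	/* ... cpaCySymPerformOp() requests; each bumps pendingCbCount ... */

	/* Removal is refused while requests are still in flight. */
	do {
		status = cpaCySymRemoveSession(cyInstance, sessionCtx);
	} while (CPA_STATUS_RETRY == status);
	freeContigMem(sessionCtx);
	return status;
}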
+* The comments section of each of the other two structures below show +* the conditions that determine which session context memory size to use. +*****************************************************************************/ +typedef struct lac_session_desc_s { + Cpa8U contentDescriptor[LAC_SYM_QAT_CONTENT_DESC_MAX_SIZE]; + /**< QAT Content Descriptor for this session. + * NOTE: Field must be correctly aligned in memory for access by QAT + * engine + */ + Cpa8U contentDescriptorOptimised[LAC_SYM_OPTIMISED_CD_SIZE]; + /**< QAT Optimised Content Descriptor for this session. + * NOTE: Field must be correctly aligned in memory for access by QAT + * engine + */ + CpaCySymOp symOperation; + /**< type of command to be performed */ + sal_qat_content_desc_info_t contentDescInfo; + /**< info on the content descriptor */ + sal_qat_content_desc_info_t contentDescOptimisedInfo; + /**< info on the optimised content descriptor */ + icp_qat_fw_la_cmd_id_t laCmdId; + /**<Command Id for the QAT FW */ + lac_sym_qat_hash_state_buffer_info_t hashStateBufferInfo; + /**< info on the hash state prefix buffer */ + CpaCySymHashAlgorithm hashAlgorithm; + /**< hash algorithm */ + Cpa32U authKeyLenInBytes; + /**< Authentication key length in bytes */ + CpaCySymHashMode hashMode; + /**< Mode of the hash operation. plain, auth or nested */ + Cpa32U hashResultSize; + /**< size of the digest produced/compared in bytes */ + CpaCySymCipherAlgorithm cipherAlgorithm; + /**< Cipher algorithm and mode */ + Cpa32U cipherKeyLenInBytes; + /**< Cipher key length in bytes */ + CpaCySymCipherDirection cipherDirection; + /**< This parameter determines if the cipher operation is an encrypt or + * a decrypt operation. */ + CpaCySymPacketType partialState; + /**< state of the partial packet. This can be written to by the perform + * because the SpinLock pPartialInFlightSpinlock guarantees that the + * state is accessible in only one place at a time. */ + icp_qat_la_bulk_req_hdr_t reqCacheHdr; + icp_qat_fw_la_key_gen_common_t reqCacheMid; + icp_qat_la_bulk_req_ftr_t reqCacheFtr; + /**< Cache as much as possible of the bulk request in a pre built + * request (header, mid & footer). */ + CpaCySymCbFunc pSymCb; + /**< symmetric function callback pointer */ + union { + QatUtilsAtomic pendingCbCount; + /**< Keeps track of number of pending requests. */ + QatUtilsAtomic pendingDpCbCount; + /**< Keeps track of number of pending DP requests (not thread + * safe)*/ + } u; + struct lac_sym_bulk_cookie_s *pRequestQueueHead; + /**< A fifo list of queued QAT requests. Head points to first queue + * entry */ + struct lac_sym_bulk_cookie_s *pRequestQueueTail; + /**< A fifo list of queued QAT requests. Tail points to last queue entry + */ + struct mtx requestQueueLock; + /**< A lock to protect accesses to the above request queue */ + CpaInstanceHandle pInstance; + /**< Pointer to Crypto instance running this session. */ + CpaBoolean isAuthEncryptOp : 1; + /**< if the algorithm chaining operation is auth encrypt */ + CpaBoolean nonBlockingOpsInProgress : 1; + /**< Flag is set if a non blocking operation is in progress for a + * session. + * If set to false, new requests will be queued until the condition is + * cleared. 
+ * ASSUMPTION: Only one blocking condition per session can exist at any + * time + */ + CpaBoolean internalSession : 1; + /**< Flag which is set if the session was set up internally for DRBG */ + CpaBoolean isDPSession : 1; + /**< Flag which is set if the session was set up for Data Plane */ + CpaBoolean digestVerify : 1; + /**< Session digest verify for data plane and for CCM/GCM for trad + * api. For other cases on trad api this flag is set in each performOp + */ + CpaBoolean digestIsAppended : 1; + /**< Flag indicating whether the digest is appended immediately + * following + * the region over which the digest is computed */ + CpaBoolean isCipher : 1; + /**< Flag indicating whether symOperation includes a cipher operation */ + CpaBoolean isAuth : 1; + /**< Flag indicating whether symOperation includes an auth operation */ + CpaBoolean useSymConstantsTable : 1; + /**< Flag indicating whether the SymConstantsTable can be used or not */ + CpaBoolean useOptimisedContentDesc : 1; + /**< Flag indicating whether to use the optimised CD or not */ + icp_qat_la_bulk_req_hdr_t shramReqCacheHdr; + icp_qat_fw_la_key_gen_common_t shramReqCacheMid; + icp_qat_la_bulk_req_ftr_t shramReqCacheFtr; + /**< Alternative pre-built request (header, mid & footer) + * for use with symConstantsTable. */ + CpaBoolean isPartialSupported : 1; + /**< Flag indicating whether symOperation support partial packet */ + CpaBoolean isSinglePass : 1; + /**< Flag indicating whether symOperation is single pass operation */ + icp_qat_fw_serv_specif_flags laCmdFlags; + /**< Common request - Service specific flags type */ + icp_qat_fw_comn_flags cmnRequestFlags; + /**< Common request flags type */ + icp_qat_fw_ext_serv_specif_flags laExtCmdFlags; + /**< Common request - Service specific flags type */ + icp_qat_la_bulk_req_hdr_t reqSpcCacheHdr; + icp_qat_la_bulk_req_ftr_t reqSpcCacheFtr; + /**< request (header & footer)for use with Single Pass. */ + icp_qat_hw_auth_mode_t qatHashMode; + /**< Hash Mode for the qat slices. Not to be confused with QA-API + * hashMode + */ + void *writeRingMsgFunc; + /**< function which will be called to write ring message */ + Cpa32U aadLenInBytes; + /**< For CCM,GCM and Snow3G cases, this parameter holds the AAD size, + * otherwise it is set to zero */ + ALIGN_START(64) + Cpa8U hashStatePrefixBuffer[LAC_MAX_AAD_SIZE_BYTES] ALIGN_END(64); + /**< hash state prefix buffer used for hash operations - AAD only + * NOTE: Field must be correctly aligned in memory for access by QAT + * engine + */ + Cpa8U hashStatePrefixBufferExt[LAC_MAX_HASH_STATE_BUFFER_SIZE_BYTES - + LAC_MAX_AAD_SIZE_BYTES]; + /**< hash state prefix buffer used for hash operations - Remainder of + * array. + * NOTE: Field must be correctly aligned in memory for access by QAT + * engine + */ + Cpa8U cipherPartialOpState[LAC_CIPHER_STATE_SIZE_MAX]; + /**< Buffer to hold the cipher state for the session (for partial ops). + * NOTE: Field must be correctly aligned in memory for access by QAT + * engine + */ + Cpa8U cipherARC4InitialState[LAC_CIPHER_ARC4_STATE_LEN_BYTES]; + /**< Buffer to hold the initial ARC4 cipher state for the session, which + * is derived from the user-supplied base key during session + * registration. + * NOTE: Field must be correctly aligned in memory for access by QAT + * engine + */ + CpaPhysicalAddr cipherARC4InitialStatePhysAddr; + /**< The physical address of the ARC4 initial state, set at init + ** session time . 
+ */ +} lac_session_desc_t; + +/** +******************************************************************************* +* @ingroup LacSym_Session +* Symmetric session descriptor - d1 +* @description +* This structure stores information about a specific session which +* assumes the following: +* - cipher algorithm is not ARC4 or Snow3G +* - partials not used +* - not AuthEncrypt operation +* - hash mode not Auth or Nested +* - no hashStatePrefixBuffer required +* It is therefore a subset of the standard symmetric session descriptor, +* with a smaller memory footprint +*****************************************************************************/ +typedef struct lac_session_desc_d1_s { + Cpa8U contentDescriptor[LAC_SYM_QAT_CONTENT_DESC_MAX_SIZE]; + /**< QAT Content Descriptor for this session. + * NOTE: Field must be correctly aligned in memory for access by QAT + * engine + */ + Cpa8U contentDescriptorOptimised[LAC_SYM_OPTIMISED_CD_SIZE]; + /**< QAT Optimised Content Descriptor for this session. + * NOTE: Field must be correctly aligned in memory for access by QAT + * engine + */ + CpaCySymOp symOperation; + /**< type of command to be performed */ + sal_qat_content_desc_info_t contentDescInfo; + /**< info on the content descriptor */ + sal_qat_content_desc_info_t contentDescOptimisedInfo; + /**< info on the optimised content descriptor */ + icp_qat_fw_la_cmd_id_t laCmdId; + /**<Command Id for the QAT FW */ + lac_sym_qat_hash_state_buffer_info_t hashStateBufferInfo; + /**< info on the hash state prefix buffer */ + CpaCySymHashAlgorithm hashAlgorithm; + /**< hash algorithm */ + Cpa32U authKeyLenInBytes; + /**< Authentication key length in bytes */ + CpaCySymHashMode hashMode; + /**< Mode of the hash operation. plain, auth or nested */ + Cpa32U hashResultSize; + /**< size of the digest produced/compared in bytes */ + CpaCySymCipherAlgorithm cipherAlgorithm; + /**< Cipher algorithm and mode */ + Cpa32U cipherKeyLenInBytes; + /**< Cipher key length in bytes */ + CpaCySymCipherDirection cipherDirection; + /**< This parameter determines if the cipher operation is an encrypt or + * a decrypt operation. */ + CpaCySymPacketType partialState; + /**< state of the partial packet. This can be written to by the perform + * because the SpinLock pPartialInFlightSpinlock guarantees that that + * the + * state is accessible in only one place at a time. */ + icp_qat_la_bulk_req_hdr_t reqCacheHdr; + icp_qat_fw_la_key_gen_common_t reqCacheMid; + icp_qat_la_bulk_req_ftr_t reqCacheFtr; + /**< Cache as much as possible of the bulk request in a pre built + * request (header, mid & footer). */ + CpaCySymCbFunc pSymCb; + /**< symmetric function callback pointer */ + union { + QatUtilsAtomic pendingCbCount; + /**< Keeps track of number of pending requests. */ + Cpa64U pendingDpCbCount; + /**< Keeps track of number of pending DP requests (not thread + * safe)*/ + } u; + struct lac_sym_bulk_cookie_s *pRequestQueueHead; + /**< A fifo list of queued QAT requests. Head points to first queue + * entry */ + struct lac_sym_bulk_cookie_s *pRequestQueueTail; + /**< A fifo list of queued QAT requests. Tail points to last queue entry + */ + struct mtx requestQueueLock; + /**< A lock to protect accesses to the above request queue */ + CpaInstanceHandle pInstance; + /**< Pointer to Crypto instance running this session. 
*/ + CpaBoolean isAuthEncryptOp : 1; + /**< if the algorithm chaining operation is auth encrypt */ + CpaBoolean nonBlockingOpsInProgress : 1; + /**< Flag is set if a non blocking operation is in progress for a + * session. + * If set to false, new requests will be queued until the condition is + * cleared. + * ASSUMPTION: Only one blocking condition per session can exist at any + * time + */ + CpaBoolean internalSession : 1; + /**< Flag which is set if the session was set up internally for DRBG */ + CpaBoolean isDPSession : 1; + /**< Flag which is set if the session was set up for Data Plane */ + CpaBoolean digestVerify : 1; + /**< Session digest verify for data plane and for CCM/GCM for trad + * api. For other cases on trad api this flag is set in each performOp + */ + CpaBoolean digestIsAppended : 1; + /**< Flag indicating whether the digest is appended immediately + * following + * the region over which the digest is computed */ + CpaBoolean isCipher : 1; + /**< Flag indicating whether symOperation includes a cipher operation */ + CpaBoolean isAuth : 1; + /**< Flag indicating whether symOperation includes an auth operation */ + CpaBoolean useSymConstantsTable : 1; + /**< Flag indicating whether the SymConstantsTable can be used or not */ + CpaBoolean useOptimisedContentDesc : 1; + /**< Flag indicating whether to use the optimised CD or not */ + icp_qat_la_bulk_req_hdr_t shramReqCacheHdr; + icp_qat_fw_la_key_gen_common_t shramReqCacheMid; + icp_qat_la_bulk_req_ftr_t shramReqCacheFtr; + /**< Alternative pre-built request (header, mid & footer) + * for use with symConstantsTable. */ + CpaBoolean isPartialSupported : 1; + /**< Flag indicating whether symOperation support partial packet */ + CpaBoolean isSinglePass : 1; + /**< Flag indicating whether symOperation is single pass operation */ + icp_qat_fw_serv_specif_flags laCmdFlags; + /**< Common request - Service specific flags type */ + icp_qat_fw_comn_flags cmnRequestFlags; + /**< Common request flags type */ + icp_qat_fw_ext_serv_specif_flags laExtCmdFlags; + /**< Common request - Service specific flags type */ + icp_qat_la_bulk_req_hdr_t reqSpcCacheHdr; + icp_qat_la_bulk_req_ftr_t reqSpcCacheFtr; + /**< request (header & footer)for use with Single Pass. */ + icp_qat_hw_auth_mode_t qatHashMode; + /**< Hash Mode for the qat slices. Not to be confused with QA-API + * hashMode + */ + void *writeRingMsgFunc; + /**< function which will be called to write ring message */ +} lac_session_desc_d1_t; + +/** +******************************************************************************* +* @ingroup LacSym_Session +* Symmetric session descriptor - d2 +* @description +* This structure stores information about a specific session which +* assumes the following: +* - authEncrypt only +* - partials not used +* - hasStatePrefixBuffer just contains AAD +* It is therefore a subset of the standard symmetric session descriptor, +* with a smaller memory footprint +*****************************************************************************/ +typedef struct lac_session_desc_d2_s { + Cpa8U contentDescriptor[LAC_SYM_QAT_CONTENT_DESC_MAX_SIZE]; + /**< QAT Content Descriptor for this session. + * NOTE: Field must be correctly aligned in memory for access by QAT + * engine + */ + Cpa8U contentDescriptorOptimised[LAC_SYM_OPTIMISED_CD_SIZE]; + /**< QAT Optimised Content Descriptor for this session. 
+ * NOTE: Field must be correctly aligned in memory for access by QAT + * engine + */ + CpaCySymOp symOperation; + /**< type of command to be performed */ + sal_qat_content_desc_info_t contentDescInfo; + /**< info on the content descriptor */ + sal_qat_content_desc_info_t contentDescOptimisedInfo; + /**< info on the optimised content descriptor */ + icp_qat_fw_la_cmd_id_t laCmdId; + /**<Command Id for the QAT FW */ + lac_sym_qat_hash_state_buffer_info_t hashStateBufferInfo; + /**< info on the hash state prefix buffer */ + CpaCySymHashAlgorithm hashAlgorithm; + /**< hash algorithm */ + Cpa32U authKeyLenInBytes; + /**< Authentication key length in bytes */ + CpaCySymHashMode hashMode; + /**< Mode of the hash operation. plain, auth or nested */ + Cpa32U hashResultSize; + /**< size of the digest produced/compared in bytes */ + CpaCySymCipherAlgorithm cipherAlgorithm; + /**< Cipher algorithm and mode */ + Cpa32U cipherKeyLenInBytes; + /**< Cipher key length in bytes */ + CpaCySymCipherDirection cipherDirection; + /**< This parameter determines if the cipher operation is an encrypt or + * a decrypt operation. */ + CpaCySymPacketType partialState; + /**< state of the partial packet. This can be written to by the perform + * because the SpinLock pPartialInFlightSpinlock guarantees that that + * the + * state is accessible in only one place at a time. */ + icp_qat_la_bulk_req_hdr_t reqCacheHdr; + icp_qat_fw_la_key_gen_common_t reqCacheMid; + icp_qat_la_bulk_req_ftr_t reqCacheFtr; + /**< Cache as much as possible of the bulk request in a pre built + * request (header. mid & footer). */ + CpaCySymCbFunc pSymCb; + /**< symmetric function callback pointer */ + union { + QatUtilsAtomic pendingCbCount; + /**< Keeps track of number of pending requests. */ + Cpa64U pendingDpCbCount; + /**< Keeps track of number of pending DP requests (not thread + * safe)*/ + } u; + struct lac_sym_bulk_cookie_s *pRequestQueueHead; + /**< A fifo list of queued QAT requests. Head points to first queue + * entry */ + struct lac_sym_bulk_cookie_s *pRequestQueueTail; + /**< A fifo list of queued QAT requests. Tail points to last queue entry + */ + struct mtx requestQueueLock; + /**< A lock to protect accesses to the above request queue */ + CpaInstanceHandle pInstance; + /**< Pointer to Crypto instance running this session. */ + CpaBoolean isAuthEncryptOp : 1; + /**< if the algorithm chaining operation is auth encrypt */ + CpaBoolean nonBlockingOpsInProgress : 1; + /**< Flag is set if a non blocking operation is in progress for a + * session. + * If set to false, new requests will be queued until the condition is + * cleared. + * ASSUMPTION: Only one blocking condition per session can exist at any + * time + */ + CpaBoolean internalSession : 1; + /**< Flag which is set if the session was set up internally for DRBG */ + CpaBoolean isDPSession : 1; + /**< Flag which is set if the session was set up for Data Plane */ + CpaBoolean digestVerify : 1; + /**< Session digest verify for data plane and for CCM/GCM for trad + * api. 
For other cases on trad api this flag is set in each performOp + */ + CpaBoolean digestIsAppended : 1; + /**< Flag indicating whether the digest is appended immediately + * following + * the region over which the digest is computed */ + CpaBoolean isCipher : 1; + /**< Flag indicating whether symOperation includes a cipher operation */ + CpaBoolean isAuth : 1; + /**< Flag indicating whether symOperation includes an auth operation */ + CpaBoolean useSymConstantsTable : 1; + /**< Flag indicating whether the SymConstantsTable can be used or not */ + CpaBoolean useOptimisedContentDesc : 1; + /**< Flag indicating whether to use the optimised CD or not */ + icp_qat_la_bulk_req_hdr_t shramReqCacheHdr; + icp_qat_fw_la_key_gen_common_t shramReqCacheMid; + icp_qat_la_bulk_req_ftr_t shramReqCacheFtr; + /**< Alternative pre-built request (header. mid & footer) + * for use with symConstantsTable. */ + CpaBoolean isPartialSupported : 1; + /**< Flag indicating whether symOperation support partial packet */ + CpaBoolean isSinglePass : 1; + /**< Flag indicating whether symOperation is single pass operation */ + icp_qat_fw_serv_specif_flags laCmdFlags; + /**< Common request - Service specific flags type */ + icp_qat_fw_comn_flags cmnRequestFlags; + /**< Common request flags type */ + icp_qat_fw_ext_serv_specif_flags laExtCmdFlags; + /**< Common request - Service specific flags type */ + icp_qat_la_bulk_req_hdr_t reqSpcCacheHdr; + icp_qat_la_bulk_req_ftr_t reqSpcCacheFtr; + /**< request (header & footer)for use with Single Pass. */ + icp_qat_hw_auth_mode_t qatHashMode; + /**< Hash Mode for the qat slices. Not to be confused with QA-API + * hashMode + */ + void *writeRingMsgFunc; + /**< function which will be called to write ring message */ + Cpa32U aadLenInBytes; + /**< For CCM,GCM and Snow3G cases, this parameter holds the AAD size, + * otherwise it is set to zero */ + ALIGN_START(64) + Cpa8U hashStatePrefixBuffer[LAC_MAX_AAD_SIZE_BYTES] ALIGN_END(64); + /**< hash state prefix buffer used for hash operations - AAD only + * NOTE: Field must be correctly aligned in memory for access by QAT + * engine + */ +} lac_session_desc_d2_t; + +#define LAC_SYM_SESSION_SIZE \ + (sizeof(lac_session_desc_t) + LAC_64BYTE_ALIGNMENT + \ + sizeof(LAC_ARCH_UINT)) +/**< @ingroup LacSym_Session + * Size of the memory that the client has to allocate for a session. Extra + * memory is needed to internally re-align the data. The pointer to the algined + * data is stored at the start of the user allocated memory hence the extra + * space for an LAC_ARCH_UINT */ + +#define LAC_SYM_SESSION_D1_SIZE \ + (sizeof(lac_session_desc_d1_t) + LAC_64BYTE_ALIGNMENT + \ + sizeof(LAC_ARCH_UINT)) +/**< @ingroup LacSym_Session +** Size of the memory that the client has to allocate for a session where : +* - cipher algorithm not ARC4 or Snow3G, no Partials, nonAuthEncrypt. +* Extra memory is needed to internally re-align the data. The pointer to the +* aligned data is stored at the start of the user allocated memory hence the +* extra space for an LAC_ARCH_UINT */ + +#define LAC_SYM_SESSION_D2_SIZE \ + (sizeof(lac_session_desc_d2_t) + LAC_64BYTE_ALIGNMENT + \ + sizeof(LAC_ARCH_UINT)) +/**< @ingroup LacSym_Session +** Size of the memory that the client has to allocate for a session where : +* - authEncrypt, no Partials - so hashStatePrefixBuffer is only AAD +* Extra memory is needed to internally re-align the data. 
The pointer to the +* aligned data is stored at the start of the user allocated memory hence the +* extra space for an LAC_ARCH_UINT */ + +#define LAC_SYM_SESSION_DESC_FROM_CTX_GET(pSession) \ + (lac_session_desc_t *)(*(LAC_ARCH_UINT *)pSession) +/**< @ingroup LacSym_Session + * Retrieve the session descriptor pointer from the session context structure + * that the user allocates. The pointer to the internally realigned address + * is stored at the start of the session context that the user allocates */ + +/** +******************************************************************************* +* @ingroup LacSym_Session +* This function initializes a session +* +* @description +* This function is called from the LAC session register API functions. +* It validates all input parameters. If an invalid parameter is passed, +* an error is returned to the calling function. If all parameters are valid +* a symmetric session is initialized +* +* @param[in] instanceHandle_in Instance Handle +* @param[in] pSymCb callback function +* @param[in] pSessionSetupData pointer to the strucutre containing the setup +*data +* @param[in] isDpSession CPA_TRUE for a data plane session +* @param[out] pSessionCtx Pointer to session context +* +* +* @retval CPA_STATUS_SUCCESS Function executed successfully. +* @retval CPA_STATUS_FAIL Function failed. +* @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. +* @retval CPA_STATUS_RESOURCE Error related to system resources. +* +*/ + +CpaStatus LacSym_InitSession(const CpaInstanceHandle instanceHandle_in, + const CpaCySymCbFunc pSymCb, + const CpaCySymSessionSetupData *pSessionSetupData, + const CpaBoolean isDpSession, + CpaCySymSessionCtx pSessionCtx); + +#endif /* LAC_SYM_SESSION_H */ diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym.h new file mode 100644 index 000000000000..402fe85378a1 --- /dev/null +++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym.h @@ -0,0 +1,356 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym.h + * + * @defgroup LacSym Symmetric + * + * @ingroup Lac + * + * Symmetric component includes cipher, Hash, chained cipher & hash, + * authenticated encryption and key generation. + * + * @lld_start + * @lld_overview + * + * The symmetric component demuliplexes the following crypto operations to + * the appropriate sub-components: cipher, hash, algorithm chaining and + * authentication encryption. It is a common layer between the above + * mentioned components where common resources are allocated and paramater + * checks are done. The operation specific resource allocation and parameter + * checks are done in the sub-component itself. + * + * The symmetric component demultiplexes the session register/deregister + * and perform functions to the appropriate subcomponents. + * + * @lld_dependencies + * - \ref LacSymPartial "Partial Packet Code": This code manages the partial + * packet state for a session. + * - \ref LacBufferDesc "Common Buffer Code" : This code traverses a buffer + * chain to ensure it is valid. + * - \ref LacSymStats "Statistics": Manages statistics for symmetric + * - \ref LacSymQat "Symmetric QAT": The symmetric qat component is + * initialiased by the symmetric component. + * - \ref LacCipher "Cipher" : demultiplex cipher opertions to this component. 
+ * - \ref LacHash "Hash" : demultiplex hash operations to this component.
+ * - \ref LacAlgChain "Algorithm Chaining" : demultiplex algorithm chaining
+ *   operations to this component.
+ * - OSAL : Memory allocation, mutexes, atomics
+ *
+ * @lld_initialisation
+ * This component is initialised during the LAC initialisation sequence. It
+ * initialises the session table, statistics, symmetric QAT, initialises the
+ * hash definitions lookup table, the hash alg supported lookup table and
+ * registers a callback function with the symmetric response handler to process
+ * response messages for Cipher, Hash and Algorithm-Chaining requests.
+ *
+ * @lld_module_algorithms
+ *
+ * @lld_process_context
+ * Refer to \ref LacHash "Hash" and \ref LacCipher "Cipher" for sequence
+ * diagrams from the symmetric component through the sub components.
+ *
+ * @lld_end
+ *
+ ***************************************************************************/
+
+/***************************************************************************/
+
+#ifndef LAC_SYM_H
+#define LAC_SYM_H
+
+#include "cpa.h"
+#include "cpa_cy_sym.h"
+#include "cpa_cy_sym_dp.h"
+#include "lac_common.h"
+#include "lac_mem_pools.h"
+#include "lac_sym_cipher_defs.h"
+#include "icp_qat_fw_la.h"
+
+#define LAC_SYM_KEY_TLS_PREFIX_SIZE 128
+/**< Hash Prefix size in bytes for TLS (128 = MAX = SHA2 (384, 512)) */
+
+#define LAC_SYM_OPTIMISED_CD_SIZE 64
+/**< The size of the optimised content desc in DRAM */
+
+#define LAC_SYM_KEY_MAX_HASH_STATE_BUFFER (LAC_SYM_KEY_TLS_PREFIX_SIZE * 2)
+/**< hash state prefix buffer structure that holds the maximum sized secret */
+
+#define LAC_SYM_HASH_BUFFER_LEN 64
+/**< Buffer length to hold 16 byte MD5 key and 20 byte SHA1 key */
+
+/* The ARC4 key will not be stored in the content descriptor so we only need to
+ * reserve enough space for the next biggest cipher setup block.
+ * Kasumi needs to store 2 keys and to have the size of 2 blocks for fw */
+#define LAC_SYM_QAT_MAX_CIPHER_SETUP_BLK_SZ \
+	(sizeof(icp_qat_hw_cipher_config_t) + 2 * ICP_QAT_HW_KASUMI_KEY_SZ + \
+	2 * ICP_QAT_HW_KASUMI_BLK_SZ)
+/**< @ingroup LacSymQat
+ * Maximum size for the cipher setup block of the content descriptor */
+
+#define LAC_SYM_QAT_MAX_HASH_SETUP_BLK_SZ sizeof(icp_qat_hw_auth_algo_blk_t)
+/**< @ingroup LacSymQat
+ * Maximum size for the hash setup block of the content descriptor */
+
+#define LAC_SYM_QAT_CONTENT_DESC_MAX_SIZE \
+	LAC_ALIGN_POW2_ROUNDUP(LAC_SYM_QAT_MAX_CIPHER_SETUP_BLK_SZ + \
+	LAC_SYM_QAT_MAX_HASH_SETUP_BLK_SZ, \
+	(1 << LAC_64BYTE_ALIGNMENT_SHIFT))
+/**< @ingroup LacSymQat
+ * Maximum size of content descriptor.
This is incremented to the next multiple + * of 64 so that it can be 64 byte aligned */ + +#define LAC_SYM_QAT_API_ALIGN_COOKIE_OFFSET \ + (offsetof(CpaCySymDpOpData, instanceHandle)) +/**< @ingroup LacSymQat + * Size which needs to be reserved before the instanceHandle field of + * lac_sym_bulk_cookie_s to align it to the correspondent instanceHandle + * in CpaCySymDpOpData */ + +#define LAC_SIZE_OF_CACHE_HDR_IN_LW 6 +/**< Size of Header part of reqCache/shramReqCache */ + +#define LAC_SIZE_OF_CACHE_MID_IN_LW 2 +/**< Size of Mid part (LW14/15) of reqCache/shramReqCache */ + +#define LAC_SIZE_OF_CACHE_FTR_IN_LW 6 +/**< Size of Footer part of reqCache/shramReqCache */ + +#define LAC_SIZE_OF_CACHE_TO_CLEAR_IN_LW 20 +/**< Size of dummy reqCache/shramReqCache to clear */ + +#define LAC_START_OF_CACHE_MID_IN_LW 14 +/**< Starting LW of reqCache/shramReqCache Mid */ + +#define LAC_START_OF_CACHE_FTR_IN_LW 26 +/**< Starting LW of reqCache/shramReqCache Footer */ + +/** + ******************************************************************************* + * @ingroup LacSym + * Symmetric cookie + * + * @description + * This cookie stores information for a particular symmetric perform op. + * This includes the request params, re-aligned Cipher IV, the request + * message sent to the QAT engine, and various user-supplied parameters + * for the operation which will be needed in our callback function. + * A pointer to this cookie is stored in the opaque data field of the QAT + * message so that it can be accessed in the asynchronous callback. + * Cookies for multiple operations on a given session can be linked + * together to allow queuing of requests using the pNext field. + * + * The parameters are placed in order to match the CpaCySymDpOpData + *structure + *****************************************************************************/ +typedef struct lac_sym_bulk_cookie_s { + + /* CpaCySymDpOpData struct so need to keep this here for correct + * alignment*/ + Cpa8U reserved[LAC_SYM_QAT_API_ALIGN_COOKIE_OFFSET]; + /** NOTE: Field must be correctly aligned in memory for access by QAT + * engine + */ + CpaInstanceHandle instanceHandle; + /**< Instance handle for the operation */ + CpaCySymSessionCtx sessionCtx; + /**< Session context */ + void *pCallbackTag; + /**< correlator supplied by the client */ + icp_qat_fw_la_bulk_req_t qatMsg; + /**< QAT request message */ + const CpaCySymOpData *pOpData; + /**< pointer to the op data structure that the user supplied in the + * perform + * operation. 
The op data is modified in the process callback function + * and the pointer is returned to the user in their callback function */ + CpaBoolean updateSessionIvOnSend; + /**< Boolean flag to indicate if the session cipher IV buffer should be + * updated prior to sending the request */ + CpaBoolean updateUserIvOnRecieve; + /**< Boolean flag to indicate if the user's cipher IV buffer should be + * updated after receiving the response from the QAT */ + CpaBoolean updateKeySizeOnRecieve; +/**< Boolean flag to indicate if the cipher key size should be + * updated after receiving the response from the QAT */ + CpaBufferList *pDstBuffer; + /**< Pointer to destination buffer to hold the data output */ + struct lac_sym_bulk_cookie_s *pNext; + /**< Pointer to next node in linked list (if request is queued) */ +} lac_sym_bulk_cookie_t; + +/** +******************************************************************************* +* @ingroup LacSymKey +* symmetric Key cookie +* @description +* This cookie stores information for a particular keygen perform op. +* This includes a hash content descriptor, request params, hash state +* buffer, and various user-supplied parameters for the operation which +* will be needed in our callback function. +* A pointer to this cookie is stored in the opaque data field of the QAT +* message so that it can be accessed in the asynchronous callback. +*****************************************************************************/ +typedef struct lac_sym_key_cookie_s { + CpaInstanceHandle instanceHandle; + /**< QAT device id supplied by the client */ + void *pCallbackTag; + /**< Mechanism used. TLS, SSL or MGF */ + Cpa8U contentDesc[LAC_SYM_QAT_MAX_HASH_SETUP_BLK_SZ]; + /**< Content descriptor. + **< NOTE: Field must be correctly aligned in memory for access by QAT + * engine */ + union { + icp_qat_fw_la_ssl_key_material_input_t sslKeyInput; + /**< SSL key material input structure */ + icp_qat_fw_la_tls_key_material_input_t tlsKeyInput; + /**< TLS key material input structure */ + icp_qat_fw_la_hkdf_key_material_input_t tlsHKDFKeyInput; + /**< TLS HHKDF key material input structure */ + } u; + /**< NOTE: Field must be correctly aligned in memory for access by QAT + * engine */ + Cpa8U hashStateBuffer[LAC_SYM_KEY_MAX_HASH_STATE_BUFFER]; + /**< hash state prefix buffer + * NOTE: Field must be correctly aligned in memory for access by QAT + * engine + */ + CpaCyGenFlatBufCbFunc pKeyGenCb; + /**< callback function supplied by the client */ + void *pKeyGenOpData; + /**< pointer to the (SSL/TLS) or MGF op data structure that the user + * supplied in the perform operation */ + CpaFlatBuffer *pKeyGenOutputData; + /**< Output data pointer supplied by the client */ + Cpa8U hashKeyBuffer[LAC_SYM_HASH_BUFFER_LEN]; + /**< 36 byte buffer to store MD5 key and SHA1 key */ +} lac_sym_key_cookie_t; + +/** +******************************************************************************* +* @ingroup LacSymNrbg +* symmetric NRBG cookie +* @description +* This cookie stores information for a particular NRBG operation. +* This includes various user-supplied parameters for the operation which +* will be needed in our callback function. +* A pointer to this cookie is stored in the opaque data field of the QAT +* message so that it can be accessed in the asynchronous callback. 
+*****************************************************************************/ +typedef struct lac_sym_nrbg_cookie_s { + CpaInstanceHandle instanceHandle; + /**< QAT device id supplied by the client */ + void *pCallbackTag; + /**< Opaque data supplied by the client */ + icp_qat_fw_la_trng_test_result_t trngHTResult; + /**< TRNG health test result + **< NOTE: Field must be correctly aligned in memory for access by QAT + * engine */ + icp_qat_fw_la_trng_req_t trngReq; + /**< TRNG request message */ + CpaCyGenFlatBufCbFunc pCb; + /**< Callback function supplied by the client */ + void *pOpData; + /**< Op data pointer supplied by the client */ + CpaFlatBuffer *pOutputData; + /**< Output data pointer supplied by the client */ +} lac_sym_nrbg_cookie_t; + +/** +******************************************************************************* +* @ingroup LacSym +* symmetric cookie +* @description +* used to determine the amount of memory to allocate for the symmetric +* cookie pool. As symmetric, random and key generation shared the same +* pool +*****************************************************************************/ +typedef struct lac_sym_cookie_s { + union { + lac_sym_bulk_cookie_t bulkCookie; + /**< symmetric bulk cookie */ + lac_sym_key_cookie_t keyCookie; + /**< symmetric key cookie */ + lac_sym_nrbg_cookie_t nrbgCookie; + /**< symmetric NRBG cookie */ + } u; + Cpa64U keyContentDescPhyAddr; + Cpa64U keyHashStateBufferPhyAddr; + Cpa64U keySslKeyInputPhyAddr; + Cpa64U keyTlsKeyInputPhyAddr; +} lac_sym_cookie_t; + +typedef struct icp_qat_la_auth_req_params_s { + /** equivalent of LW26 of icp_qat_fw_la_auth_req_params_s */ + union { + uint8_t inner_prefix_sz; + /**< Size in bytes of the inner prefix data */ + + uint8_t aad_sz; + /**< Size in bytes of padded AAD data to prefix to the packet + * for CCM + * or GCM processing */ + } u2; + + uint8_t resrvd1; + /**< reserved */ + + uint8_t hash_state_sz; + /**< Number of quad words of inner and outer hash prefix data to process + * Maximum size is 240 */ + + uint8_t auth_res_sz; + /**< Size in bytes of the authentication result */ +} icp_qat_la_auth_req_params_t; + +/* Header (LW's 0 - 5) of struct icp_qat_fw_la_bulk_req_s */ +typedef struct icp_qat_la_bulk_req_hdr_s { + /**< LWs 0-1 */ + icp_qat_fw_comn_req_hdr_t comn_hdr; + /**< Common request header - for Service Command Id, + * use service-specific Crypto Command Id. + * Service Specific Flags - use Symmetric Crypto Command Flags + * (all of cipher, auth, SSL3, TLS and MGF, + * excluding TRNG - field unused) */ + + /**< LWs 2-5 */ + icp_qat_fw_comn_req_hdr_cd_pars_t cd_pars; + /**< Common Request content descriptor field which points either to a + * content descriptor + * parameter block or contains the service-specific data itself. 
*/ +} icp_qat_la_bulk_req_hdr_t; + +/** Footer (LW's 26 - 31) of struct icp_qat_fw_la_bulk_req_s */ +typedef struct icp_qat_la_bulk_req_ftr_s { + /**< LW 0 - equivalent to LW26 of icp_qat_fw_la_bulk_req_t */ + icp_qat_la_auth_req_params_t serv_specif_rqpars; + /**< Common request service-specific parameter field */ + + /**< LW's 1-5, equivalent to LWs 27-31 of icp_qat_fw_la_bulk_req_s */ + icp_qat_fw_comn_req_cd_ctrl_t cd_ctrl; + /**< Common request content descriptor control block - + * this field is service-specific */ +} icp_qat_la_bulk_req_ftr_t; + +/** + *** + ******************************************************************************* + * @ingroup LacSym + * Compile time check of lac_sym_bulk_cookie_t + * + * @description + * Performs a compile time check of lac_sym_bulk_cookie_t to ensure IA + * assumptions are valid. + * + *****************************************************************************/ +void LacSym_CompileTimeAssertions(void); + +void LacDp_WriteRingMsgFull(CpaCySymDpOpData *pRequest, + icp_qat_fw_la_bulk_req_t *pCurrentQatMsg); +void LacDp_WriteRingMsgOpt(CpaCySymDpOpData *pRequest, + icp_qat_fw_la_bulk_req_t *pCurrentQatMsg); + +#endif /* LAC_SYM_H */ diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_alg_chain.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_alg_chain.h new file mode 100644 index 000000000000..1750fd0bebf4 --- /dev/null +++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_alg_chain.h @@ -0,0 +1,294 @@ +/*************************************************************************** + * + * <COPYRIGHT_TAG> + * + ***************************************************************************/ + +/** + ***************************************************************************** + * @file lac_sym_alg_chain.h + * + * @defgroup LacAlgChain Algorithm Chaining + * + * @ingroup LacSym + * + * Interfaces exposed by the Algorithm Chaining Component + * + * @lld_start + * + * @lld_overview + * This is the LAC Algorithm-Chaining feature component. This component + * implements session registration and cleanup functions, and a perform + * function. Statistics are maintained to track requests issued and completed, + * errors incurred, and authentication verification failures. For each + * function the parameters supplied by the client are checked, and then the + * function proceeds if all the parameters are valid. This component also + * incorporates support for Authenticated-Encryption (CCM and GCM) which + * essentially comprises of a cipher operation and a hash operation combined. + * + * This component can combine a cipher operation with a hash operation or just + * simply create a hash only or cipher only operation and is called from the + * LAC Symmetric API component. In turn it calls the LAC Cipher, LAC Hash, and + * LAC Symmetric QAT components. The goal here is to duplicate as little code + * as possible from the Cipher and Hash components. + * + * The cipher and hash operations can be combined in either order, i.e. cipher + * first then hash or hash first then cipher. The client specifies this via + * the algChainOrder field in the session context. This ordering choice is + * stored as part of the session descriptor, so that it is known when a + * perform request is issued. In the case of Authenticated-Encryption, the + * ordering is an implicit part of the CCM or GCM protocol. 
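To make the ordering choice concrete, the sketch below shows roughly how a caller of the public QuickAssist symmetric API might register a cipher-then-hash chained session (AES-CBC followed by HMAC-SHA256). It is an illustrative sketch only, assuming the standard cpa_cy_sym.h definitions; the function name, the 256-bit key lengths and the pAesKey, pHmacKey, symCallback and sessionCtx parameters are placeholders, and the DMA-able allocation of the session context is not shown.

#include "cpa.h"
#include "cpa_cy_sym.h"

/* Illustrative only: register an AES-CBC + HMAC-SHA256 chained session,
 * cipher first, with the digest appended after the ciphertext. */
static CpaStatus
example_init_chained_session(CpaInstanceHandle instanceHandle,
    CpaCySymCbFunc symCallback, Cpa8U *pAesKey, Cpa8U *pHmacKey,
    CpaCySymSessionCtx sessionCtx)
{
	CpaCySymSessionSetupData setup = { 0 };
	Cpa32U ctxSize = 0;
	CpaStatus status;

	setup.sessionPriority = CPA_CY_PRIORITY_NORMAL;
	setup.symOperation = CPA_CY_SYM_OP_ALGORITHM_CHAINING;
	setup.algChainOrder = CPA_CY_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH;

	setup.cipherSetupData.cipherAlgorithm = CPA_CY_SYM_CIPHER_AES_CBC;
	setup.cipherSetupData.cipherDirection =
	    CPA_CY_SYM_CIPHER_DIRECTION_ENCRYPT;
	setup.cipherSetupData.pCipherKey = pAesKey;
	setup.cipherSetupData.cipherKeyLenInBytes = 32;

	setup.hashSetupData.hashAlgorithm = CPA_CY_SYM_HASH_SHA256;
	setup.hashSetupData.hashMode = CPA_CY_SYM_HASH_MODE_AUTH;
	setup.hashSetupData.digestResultLenInBytes = 32;
	setup.hashSetupData.authModeSetupData.authKey = pHmacKey;
	setup.hashSetupData.authModeSetupData.authKeyLenInBytes = 32;

	setup.digestIsAppended = CPA_TRUE;
	setup.verifyDigest = CPA_FALSE;

	status = cpaCySymSessionCtxGetSize(instanceHandle, &setup, &ctxSize);
	if (status != CPA_STATUS_SUCCESS)
		return status;

	/* sessionCtx must point to at least ctxSize bytes of DMA-able
	 * memory; the allocation itself is not shown here. */
	return cpaCySymInitSession(instanceHandle, symCallback, &setup,
	    sessionCtx);
}

Selecting CPA_CY_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER instead reverses the order, as described above; for CCM and GCM the ordering is fixed by the protocol and the algChainOrder value is not consulted.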
+ * + * When building a content descriptor, as part of session registration, this + * component asks the Cipher and Hash components to build their respective + * parts of the session descriptor. The key aspect here is to provide the + * correct offsets to the Cipher and Hash components for where in the content + * descriptor to write their Config and Hardware Setup blocks. Also the + * Config block in each case must specify the appropriate next slice. + * + * When building request parameters, as part of a perform operation, this + * component asks the Cipher and Hash components to build their respective + * parts of the request parameters block. Again the key aspect here is to + * provide the correct offsets to the Cipher and Hash components for where in + * the request parameters block to write their parameters. Also the request + * parameters block in each case must specify the appropriate next slice. + * + * Parameter checking for session registration and for operation perform is + * mostly delegated to the Cipher and Hash components. There are a few + * extra checks that this component must perform: check the algChainOrder + * parameter, ensure that CCM/GCM are specified for hash/cipher algorithms + * as appropriate, and ensure that requests are for full packets (partial + * packets are not supported for Algorithm-Chaining). + * + * The perform operation allocates a cookie to capture information required + * in the request callback. This cookie is then freed in the callback. + * + * @lld_dependencies + * - \ref LacCipher "Cipher" : For taking care of the cipher aspects of + * session registration and operation perform + * - \ref LacHash "Hash" : For taking care of the hash aspects of session + * registration and operation perform + * - \ref LacSymCommon "Symmetric Common" : statistics. + * - \ref LacSymQat "Symmetric QAT": To build the QAT request message, + * request param structure, and populate the content descriptor. Also + * for registering a callback function to process the QAT response. + * - \ref QatComms "QAT Comms" : For sending messages to the QAT, and for + * setting the response callback + * - \ref LacMem "Mem" : For memory allocation and freeing, virtual/physical + * address translation, and translating between scalar and pointer types + * - OSAL : For atomics and locking + * + * @lld_module_algorithms + * This component builds up a chain of slices at session init time + * and stores it in the session descriptor. This is used for building up the + * content descriptor at session init time and the request parameters structure + * in the perform operation. + * + * The offsets for the first slice are updated so that the second slice adds + * its configuration information after that of the first slice. The first + * slice also configures the next slice appropriately. + * + * This component is very much hard-coded to just support cipher+hash or + * hash+cipher. It should be quite possible to extend this idea to support + * an arbitrary chain of commands, by building up a command chain that can + * be traversed in order to build up the appropriate configuration for the + * QAT. This notion should be looked at in the future if other forms of + * Algorithm-Chaining are desired. 
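As a companion to the session sketch above, a perform call for one full, in-place packet on such a session might look roughly as follows. Again this is only an illustration against the public cpa_cy_sym.h names; example_perform_chained_op, pIvBuffer, payloadLen, pSrcBufferList and pCallbackTag are placeholders. The op data and buffer list are supplied by the caller and must remain valid until the callback runs, matching the ownership rule noted for pOpData below.

/* Illustrative only: one full-packet, in-place perform on a chained session
 * set up as in the previous sketch (digest appended after the ciphertext). */
static CpaStatus
example_perform_chained_op(CpaInstanceHandle instanceHandle,
    CpaCySymSessionCtx sessionCtx, CpaCySymOpData *pOpData,
    CpaBufferList *pSrcBufferList, Cpa8U *pIvBuffer, Cpa32U payloadLen,
    void *pCallbackTag)
{
	/* pOpData and pSrcBufferList are caller-allocated and must stay
	 * valid until the completion callback has run. */
	*pOpData = (CpaCySymOpData){ 0 };
	pOpData->sessionCtx = sessionCtx;
	pOpData->packetType = CPA_CY_SYM_PACKET_TYPE_FULL;
	pOpData->pIv = pIvBuffer;		/* 16-byte IV for AES-CBC */
	pOpData->ivLenInBytes = 16;
	pOpData->cryptoStartSrcOffsetInBytes = 0;
	pOpData->messageLenToCipherInBytes = payloadLen;
	pOpData->hashStartSrcOffsetInBytes = 0;
	pOpData->messageLenToHashInBytes = payloadLen;
	/* digestIsAppended was set at session init, so pDigestResult is
	 * left NULL and the digest is written after the ciphertext. */

	/* Completion is reported asynchronously through the callback that
	 * was registered with cpaCySymInitSession(). */
	return cpaCySymPerformOp(instanceHandle, pCallbackTag, pOpData,
	    pSrcBufferList, pSrcBufferList /* in-place */, NULL);
}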
+ * + * @lld_process_context + * + * @lld_end + * + *****************************************************************************/ + +/*****************************************************************************/ + +#ifndef LAC_SYM_ALG_CHAIN_H +#define LAC_SYM_ALG_CHAIN_H + +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ + +#include "cpa.h" +#include "cpa_cy_sym.h" +#include "lac_session.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ + +/* Macro for checking if zero length buffer are supported + * only for cipher is AES-GCM and hash are AES-GCM/AES-GMAC */ +#define IS_ZERO_LENGTH_BUFFER_SUPPORTED(cipherAlgo, hashAlgo) \ + (CPA_CY_SYM_CIPHER_AES_GCM == cipherAlgo && \ + (CPA_CY_SYM_HASH_AES_GMAC == hashAlgo || \ + CPA_CY_SYM_HASH_AES_GCM == hashAlgo)) + +/** +******************************************************************************* +* @ingroup LacAlgChain +* This function registers a session for an Algorithm-Chaining operation. +* +* @description +* This function is called from the LAC session register API function for +* Algorithm-Chaining operations. It validates all input parameters. If +* an invalid parameter is passed, an error is returned to the calling +* function. If all parameters are valid an Algorithm-Chaining session is +* registered. +* +* @param[in] instanceHandle Instance Handle +* +* @param[in] pSessionCtx Pointer to session context which contains +* parameters which are static for a given +* cryptographic session such as operation type, +* mechanisms, and keys for cipher and/or digest +* operations. +* @param[out] pSessionDesc Pointer to session descriptor +* +* @retval CPA_STATUS_SUCCESS Function executed successfully. +* @retval CPA_STATUS_FAIL Function failed. +* @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. +* @retval CPA_STATUS_RESOURCE Error related to system resources. +* +* @see cpaCySymInitSession() +* +*****************************************************************************/ +CpaStatus LacAlgChain_SessionInit(const CpaInstanceHandle instanceHandle, + const CpaCySymSessionSetupData *pSessionCtx, + lac_session_desc_t *pSessionDesc); + +/** +******************************************************************************* +* @ingroup LacAlgChain +* Data path function for the Algorithm-Chaining component +* +* @description +* This function gets called from cpaCySymPerformOp() which is the +* symmetric LAC API function. It is the data path function for the +* Algorithm-Chaining component. It does the parameter checking on the +* client supplied parameters and if the parameters are valid, the +* operation is performed and a request sent to the QAT, otherwise an +* error is returned to the client. +* +* @param[in] instanceHandle Instance Handle +* +* @param[in] pSessionDesc Pointer to session descriptor +* @param[in] pCallbackTag The application's context for this call +* @param[in] pOpData Pointer to a structure containing request +* parameters. The client code allocates the memory for +* this structure. This component takes ownership of +* the memory until it is returned in the callback. 
+* +* @param[in] pSrcBuffer Source Buffer List +* @param[out] pDstBuffer Destination Buffer List +* @param[out] pVerifyResult Verify Result +* +* @retval CPA_STATUS_SUCCESS Function executed successfully. +* @retval CPA_STATUS_FAIL Function failed. +* @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. +* @retval CPA_STATUS_RESOURCE Error related to system resource. +* +* @see cpaCySymPerformOp() +* +*****************************************************************************/ +CpaStatus LacAlgChain_Perform(const CpaInstanceHandle instanceHandle, + lac_session_desc_t *pSessionDesc, + void *pCallbackTag, + const CpaCySymOpData *pOpData, + const CpaBufferList *pSrcBuffer, + CpaBufferList *pDstBuffer, + CpaBoolean *pVerifyResult); + +/** +******************************************************************************* +* @ingroup LacAlgChain +* This function is used to update cipher key, as specified in provided +* input. +* +* @description +* This function is called from the LAC session register API function for +* Algorithm-Chaining operations. It validates all input parameters. If +* an invalid parameter is passed, an error is returned to the calling +* function. If all parameters are valid an Algorithm-Chaining session is +* updated. +* +* @threadSafe +* No +* +* @param[in] pSessionDesc Pointer to session descriptor +* @param[in] pCipherKey Pointer to new cipher key. +* +* @retval CPA_STATUS_SUCCESS Function executed successfully. +* @retval CPA_STATUS_FAIL Function failed. +* @retval CPA_STATUS_RETRY Resubmit the request. +* @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. +* @retval CPA_STATUS_UNSUPPORTED Function is not supported. +* +*****************************************************************************/ +CpaStatus LacAlgChain_SessionCipherKeyUpdate(lac_session_desc_t *pSessionDesc, + Cpa8U *pCipherKey); + +/** +******************************************************************************* +* @ingroup LacAlgChain +* This function is used to update authentication key, as specified in +* provided input. +* +* @description +* This function is called from the LAC session register API function for +* Algorithm-Chaining operations. It validates all input parameters. If +* an invalid parameter is passed, an error is returned to the calling +* function. If all parameters are valid an Algorithm-Chaining session is +* updated. +* +* @threadSafe +* No +* +* @param[in] pSessionDesc Pointer to session descriptor +* @param[in] pCipherKey Pointer to new authentication key. +* +* @retval CPA_STATUS_SUCCESS Function executed successfully. +* @retval CPA_STATUS_FAIL Function failed. +* @retval CPA_STATUS_RETRY Resubmit the request. +* @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in. +* @retval CPA_STATUS_UNSUPPORTED Function is not supported. +* +*****************************************************************************/ +CpaStatus LacAlgChain_SessionAuthKeyUpdate(lac_session_desc_t *pSessionDesc, + Cpa8U *pAuthKey); + +/** +******************************************************************************* +* @ingroup LacAlgChain +* This function is used to update AAD length as specified in provided +* input. +* +* @description +* This function is called from the LAC session register API function for +* Algorithm-Chaining operations. It validates all input parameters. If +* an invalid parameter is passed, an error is returned to the calling +* function. If all parameters are valid an Algorithm-Chaining session is +* updated. 
+*
+* @threadSafe
+* No
+*
+* @param[in] pSessionDesc Pointer to session descriptor
+* @param[in] newAADLength New AAD length.
+*
+* @retval CPA_STATUS_SUCCESS Function executed successfully.
+* @retval CPA_STATUS_FAIL Function failed.
+* @retval CPA_STATUS_RETRY Resubmit the request.
+* @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed in.
+* @retval CPA_STATUS_UNSUPPORTED Function is not supported.
+*
+*****************************************************************************/
+CpaStatus LacAlgChain_SessionAADUpdate(lac_session_desc_t *pSessionDesc,
+	Cpa32U newAADLength);
+
+#endif /* LAC_SYM_ALG_CHAIN_H */
diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_auth_enc.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_auth_enc.h
new file mode 100644
index 000000000000..76e5e53c38a8
--- /dev/null
+++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_auth_enc.h
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+
+/**
+ *****************************************************************************
+ * @file lac_sym_auth_enc.h
+ *
+ * @defgroup LacAuthEnc Authenticated Encryption
+ *
+ * @ingroup LacSym
+ *
+ * @description
+ * Authenticated encryption specific functionality.
+ * For CCM related code NIST SP 800-38C is followed.
+ * For GCM related code NIST SP 800-38D is followed.
+ *
+ ***************************************************************************/
+#ifndef LAC_SYM_AUTH_ENC_H_
+#define LAC_SYM_AUTH_ENC_H_
+
+/* This define for CCM describes constant sum of n and q */
+#define LAC_ALG_CHAIN_CCM_NQ_CONST 15
+
+/* These defines for CCM describe maximum and minimum
+ * length of nonce in bytes */
+#define LAC_ALG_CHAIN_CCM_N_LEN_IN_BYTES_MAX 13
+#define LAC_ALG_CHAIN_CCM_N_LEN_IN_BYTES_MIN 7
+
+/**
+ * @ingroup LacAuthEnc
+ * This function applies any necessary padding to the additional authentication
+ * data pointed to by the pAdditionalAuthData field of pOpData, as described in
+ * NIST SP 800-38D.
+ *
+ * @param[in] pSessionDesc Pointer to the session descriptor
+ * @param[in,out] pAdditionalAuthData Pointer to AAD
+ *
+ * @retval CPA_STATUS_SUCCESS Operation finished successfully
+ *
+ * @pre pAdditionalAuthData has been param checked
+ *
+ */
+void LacSymAlgChain_PrepareGCMData(lac_session_desc_t *pSessionDesc,
+	Cpa8U *pAdditionalAuthData);
+
+/**
+ * @ingroup LacAuthEnc
+ * This function performs parameter checks on the IV and AAD for CCM.
+ *
+ * @param[in,out] pAdditionalAuthData Pointer to AAD
+ * @param[in,out] pIv Pointer to IV
+ * @param[in] messageLenToCipherInBytes Size of the message to cipher
+ * @param[in] ivLenInBytes Size of the IV
+ *
+ * @retval CPA_STATUS_SUCCESS Operation finished successfully
+ * @retval CPA_STATUS_INVALID_PARAM Invalid parameter passed
+ *
+ */
+CpaStatus LacSymAlgChain_CheckCCMData(Cpa8U *pAdditionalAuthData,
+	Cpa8U *pIv,
+	Cpa32U messageLenToCipherInBytes,
+	Cpa32U ivLenInBytes);
+
+/**
+ * @ingroup LacAuthEnc
+ * This function prepares the Ctr0 and B0-Bn blocks for the CCM algorithm as
+ * described in NIST SP 800-38C. The Ctr0 block is placed in the pIv field of
+ * pOpData and the B0-Bn blocks are placed in pAdditionalAuthData.
+ * + * @param[in] pSessionDesc Pointer to the session descriptor + * @param[in,out] pAdditionalAuthData Pointer to AAD + * @param[in,out] pIv Pointer to IV + * @param[in] messageLenToCipherInBytes Size of the message to cipher + * @param[in] ivLenInBytes Size of the IV + * + * @retval none + * + * @pre parameters have been checked using LacSymAlgChain_CheckCCMData() + */ +void LacSymAlgChain_PrepareCCMData(lac_session_desc_t *pSessionDesc, + Cpa8U *pAdditionalAuthData, + Cpa8U *pIv, + Cpa32U messageLenToCipherInBytes, + Cpa32U ivLenInBytes); + +#endif /* LAC_SYM_AUTH_ENC_H_ */ diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_cb.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_cb.h new file mode 100644 index 000000000000..5332f8aab510 --- /dev/null +++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_cb.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_cb.h + * + * @defgroup LacSymCb Symmetric callback functions + * + * @ingroup LacSym + * + * Functions to assist with callback processing for the symmetric component + ***************************************************************************/ + +#ifndef LAC_SYM_CB_H +#define LAC_SYM_CB_H + +/** + ***************************************************************************** + * @ingroup LacSym + * Dequeue pending requests + * @description + * This function is called by a callback function of a blocking + * operation (either a partial packet or a hash precompute operaion) + * in softIRQ context. It dequeues requests for the following reasons: + * 1. All pre-computes that happened when initialising a session + * have completed. Dequeue any requests that were queued on the + * session while waiting for the precompute operations to complete. + * 2. A partial packet request has completed. Dequeue any partials + * that were queued for this session while waiting for a previous + * partial to complete. 
+ *
+ * @param[in] pSessionDesc Pointer to the session descriptor
+ *
+ * @return CpaStatus
+ *
+ ****************************************************************************/
+CpaStatus LacSymCb_PendingReqsDequeue(lac_session_desc_t *pSessionDesc);
+
+/**
+ *****************************************************************************
+ * @ingroup LacSym
+ * Register symmetric callback function handlers
+ *
+ * @description
+ * This function registers the symmetric callback handler functions with
+ * the main symmetric callback handler function.
+ *
+ * @return None
+ *
+ ****************************************************************************/
+void LacSymCb_CallbacksRegister(void);
+
+#endif /* LAC_SYM_CB_H */
diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_cipher.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_cipher.h
new file mode 100644
index 000000000000..822e9ce03f94
--- /dev/null
+++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_cipher.h
@@ -0,0 +1,312 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright(c) 2007-2022 Intel Corporation */
+/* $FreeBSD$ */
+
+/**
+ *****************************************************************************
+ * @file lac_sym_cipher.h
+ *
+ * @defgroup LacCipher Cipher
+ *
+ * @ingroup LacSym
+ *
+ * API functions of the cipher component
+ *
+ * @lld_start
+ * @lld_overview
+ * There is a single \ref icp_LacSym "Symmetric LAC API" for hash, cipher,
+ * auth encryption and algorithm chaining. This API is implemented by the
+ * \ref LacSym "Symmetric" module. It demultiplexes calls to this API into
+ * their basic operation and does some common parameter checking and deals
+ * with accesses to the session table.
+ *
+ * The cipher component supports data encryption/decryption using the AES, DES,
+ * and Triple-DES cipher algorithms, in ECB, CBC and CTR modes. The ARC4 stream
+ * cipher algorithm is also supported. Data may be provided as a full packet,
+ * or as a sequence of partial packets. The result of the operation can be
+ * written back to the source buffer (in-place) or to a separate output buffer
+ * (out-of-place). Data must be encapsulated in ICP buffers.
+ *
+ * The cipher component is responsible for implementing the cipher-specific
+ * functionality for registering and de-registering a session, for the perform
+ * operation and for processing the QAT responses to cipher requests. Statistics
+ * are maintained for cipher in the symmetric \ref CpaCySymStats64 "stats"
+ * structure. This module has been separated out into two. The cipher QAT module
+ * deals entirely with QAT data structures. The cipher module itself has minimal
+ * exposure to the QAT data structures.
+ *
+ * @lld_dependencies
+ * - \ref LacCommon
+ * - \ref LacSymQat "Symmetric QAT": Hash uses the lookup table provided by
+ * this module to validate user input. Hash also uses this module to build
+ * the hash QAT request message, request param structure, populate the
+ * content descriptor, allocate and populate the hash state prefix buffer.
+ * Hash also registers its function to process the QAT response with this
+ * module.
+ * - OSAL : For memory functions, atomics and locking
+ *
+ * @lld_module_algorithms
+ * In general, all the cipher algorithms supported by this component are
+ * implemented entirely by the QAT. However, in the case of the ARC4 algorithm,
+ * it was deemed more efficient to carry out some processing on IA.
During + * session registration, an initial state is derived from the base key provided + * by the user, using a simple ARC4 Key Scheduling Algorithm (KSA). Then the + * base key is discarded, but the state is maintained for the duration of the + * session. + * + * The ARC4 key scheduling algorithm (KSA) is specified as follows + * (taken from http://en.wikipedia.org/wiki/RC4_(cipher)): + * \code + * for i from 0 to 255 + * S[i] := i + * endfor + * j := 0 + * for i from 0 to 255 + * j := (j + S[i] + key[i mod keylength]) mod 256 + * swap(S[i],S[j]) + * endfor + * \endcode + * + * On registration of a new ARC4 session, the user provides a base key of any + * length from 1 to 256 bytes. This algorithm produces the initial ARC4 state + * (key matrix + i & j index values) from that base key. This ARC4 state is + * used as input for each ARC4 cipher operation in that session, and is updated + * by the QAT after each operation. The ARC4 state is stored in a session + * descriptor, and it's memory is freed when the session is deregistered. + * + * <b>Block Vs. Stream Ciphers</b>\n + * Block ciphers are treated slightly differently than Stream ciphers by this + * cipher component. Supported stream ciphers consist of AES and + * TripleDES algorithms in CTR mode, and ARC4. The 2 primary differences are: + * - Data buffers for block ciphers are required to be a multiple of the + * block size defined for the algorithm (e.g. 8 bytes for DES). For stream + * ciphers, there is no such restriction. + * - For stream ciphers, decryption is performed by setting the QAT hardware + * to encryption mode. + * + * <b>Memory address alignment of data buffers </b>\n + * The QAT requires that most data buffers are aligned on an 8-byte memory + * address boundary (64-byte boundary for optimum performance). For Cipher, + * this applies to the cipher key buffer passed in the Content Descriptor, + * and the IV/State buffer passed in the Request Parameters block in each + * request. Both of these buffers are provided by the user. It does not + * apply to the cipher source/destination data buffers. + * Alignment of the key buffer is ensured because the key is always copied + * from the user provided buffer into a new (aligned) buffer for the QAT + * (the hardware setup block, which configures the QAT slice). This is done + * once only during session registration, and the user's key buffer can be + * effectively discarded after that. + * The IV/State buffer is provided per-request by the user, so it is recommended + * to the user to provide aligned buffers for optimal performance. In the case + * where an unaligned buffer is provided, a new temporary buffer is allocated + * and the user's IV/State data is copied into this buffer. The aligned buffer + * is then passed to the QAT in the request. In the response callback, if the + * IV was updated by the QAT, the contents are copied back to the user's buffer + * and the temporary buffer is freed. 
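For reference, the key scheduling pseudocode quoted above maps directly to C. The following is an illustrative rendering only (the helper name arc4_ksa_example is a placeholder), not the routine the driver actually uses to build the per-session ARC4 state.

/* Illustrative C rendering of the ARC4 key scheduling algorithm quoted
 * above; the resulting 256-byte state (plus the i and j indices) is what
 * the session descriptor retains for the lifetime of the session. */
static void
arc4_ksa_example(const unsigned char *key, unsigned int keylength,
    unsigned char S[256])
{
	unsigned int i, j = 0;
	unsigned char tmp;

	for (i = 0; i < 256; i++)
		S[i] = (unsigned char)i;
	for (i = 0; i < 256; i++) {
		j = (j + S[i] + key[i % keylength]) & 0xff;
		tmp = S[i];
		S[i] = S[j];
		S[j] = tmp;
	}
}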
+ * + * @lld_process_context + * + * Session Register Sequence Diagram: For ARC4 cipher algorithm + * \msc + * APP [label="Application"], SYM [label="Symmetric LAC"], + * Achain [label="Alg chain"], Cipher, SQAT [label="Symmetric QAT"]; + * + * APP=>SYM [ label = "cpaCySymInitSession(cbFunc)", + * URL="\ref cpaCySymInitSession()"] ; + * SYM=>SYM [ label = "LacSymSession_ParamCheck()", + * URL="\ref LacSymSession_ParamCheck()"]; + * SYM=>Achain [ label = "LacAlgChain_SessionInit()", + * URL="\ref LacAlgChain_SessionInit()"]; + * Achain=>Cipher [ label = "LacCipher_SessionSetupDataCheck()", + * URL="\ref LacCipher_SessionSetupDataCheck()"]; + * Achain<<Cipher [ label="return"]; + * Achain=>SQAT [ label = "LacSymQat_CipherContentDescPopulate()", + * URL="\ref LacSymQat_CipherContentDescPopulate()"]; + * Achain<<SQAT [ label="return"]; + * Achain=>SQAT [ label = "LacSymQat_CipherArc4StateInit()", + * URL="\ref LacSymQat_CipherArc4StateInit()"]; + * Achain<<SQAT [ label="return"]; + * SYM<<Achain [ label = "status" ]; + * SYM=>SYM [label = "LAC_SYM_STAT_INC", URL="\ref LAC_SYM_STAT_INC"]; + * APP<<SYM [label = "status"]; + * \endmsc + * + * Perform Sequence Diagram: TripleDES CBC-mode encryption, in-place full + *packet, asynchronous mode \msc APP [label="Application"], SYM + *[label="Symmetric LAC"], SC [label="Symmetric Common"], Achain [label="Alg + *chain"], Cipher, SQAT [label="Symmetric QAT"], BUF [label="LAC Buffer Desc"], + *SYMQ [label="Symmetric Queue"], SYMCB [label="Symmetric Callback"], LMP + *[label="LAC Mem Pool"], QATCOMMS [label="QAT Comms"]; + * + * APP=>SYM [ label = "cpaCySymPerformOp()", + * URL="\ref cpaCySymPerformOp()"] ; + * SYM=>SYM [ label = "LacSym_Perform()", + * URL="\ref LacSym_Perform()"]; + * SYM=>SYM [ label = "LacSymPerform_BufferParamCheck()", + * URL="\ref LacSymPerform_BufferParamCheck()"]; + * SYM<<SYM [ label = "status"]; + * SYM=>Achain [ label = "LacAlgChain_Perform()", + * URL="\ref LacCipher()"]; + * Achain=>Cipher [ label = "LacCipher_PerformParamCheck()", + * URL="\ref LacCipher_PerformParamCheck()"]; + * Achain<<Cipher [ label="status"]; + * Achain=>LMP [label="Lac_MemPoolEntryAlloc()", + * URL="\ref Lac_MemPoolEntryAlloc()"]; + * Achain<<LMP [label="return"]; + * Achain=>Cipher [ label = "LacCipher_PerformIvCheckAndAlign()", + * URL="\ref LacCipher_PerformIvCheckAndAlign()"]; + * Achain<<Cipher [ label="status"]; + * Achain=>SQAT [ label = "LacSymQat_CipherRequestParamsPopulate()", + * URL="\ref LacSymQat_CipherRequestParamsPopulate()"]; + * Achain<<SQAT [ label="return"]; + * Achain=>BUF [ label = "LacBuffDesc_BufferListDescWrite()", + * URL = "\ref LacBuffDesc_BufferListDescWrite()"]; + * Achain<<BUF [ label="return"]; + * Achain=>SQAT [ label = "SalQatMsg_CmnMsgAndReqParamsPopulate()", + * URL="\ref SalQatMsg_CmnMsgAndReqParamsPopulate()"]; + * Achain<<SQAT [ label="return"]; + * Achain=>SYMQ [ label = "LacSymQueue_RequestSend()", + * URL="\ref LacSymQueue_RequestSend()"]; + * SYMQ=>QATCOMMS [ label = "QatComms_MsgSend()", + * URL="\ref QatComms_MsgSend()"]; + * SYMQ<<QATCOMMS [ label="status"]; + * Achain<<SYMQ [ label="status"]; + * SYM<<Achain[ label="status"]; + * SYM=>SYM [ label = "LacSym_PartialPacketStateUpdate()", + * URL="\ref LacSym_PartialPacketStateUpdate()"]; + * SYM<<SYM [ label = "return"]; + * SYM=>SC [label = "LAC_SYM_STAT_INC", URL="\ref LAC_SYM_STAT_INC"]; + * SYM<<SC [ label="return"]; + * SYM<<SYM [ label = "status"]; + * APP<<SYM [label = "status"]; + * ... 
[label = "QAT processing the request and generates response"]; + * ...; + * QATCOMMS=>QATCOMMS [label ="QatComms_ResponseMsgHandler()", + * URL="\ref QatComms_ResponseMsgHandler()"]; + * QATCOMMS=>SQAT [label ="LacSymQat_SymRespHandler()", + * URL="\ref LacSymQat_SymRespHandler()"]; + * SQAT=>SYMCB [label="LacSymCb_ProcessCallback()", + * URL="\ref LacSymCb_ProcessCallback()"]; + * SYMCB=>SYMCB [label="LacSymCb_ProcessCallbackInternal()", + * URL="\ref LacSymCb_ProcessCallbackInternal()"]; + * SYMCB=>LMP [label="Lac_MemPoolEntryFree()", + * URL="\ref Lac_MemPoolEntryFree()"]; + * SYMCB<<LMP [label="return"]; + * SYMCB=>SC [label = "LAC_SYM_STAT_INC", URL="\ref LAC_SYM_STAT_INC"]; + * SYMCB<<SC [label = "return"]; + * SYMCB=>APP [label="cbFunc"]; + * SYMCB<<APP [label="return"]; + * SQAT<<SYMCB [label="return"]; + * QATCOMMS<<SQAT [label="return"]; + * \endmsc + * + * #See the sequence diagram for cpaCySymInitSession() + * + * @lld_end + * + *****************************************************************************/ + +/***************************************************************************/ + +#ifndef LAC_SYM_CIPHER_H +#define LAC_SYM_CIPHER_H + +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ + +#include "cpa.h" +#include "cpa_cy_sym.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ + +#include "lac_session.h" +#include "lac_sym.h" + +/* + * WARNING: There are no checks done on the parameters of the functions in + * this file. The expected values of the parameters are documented and it is + * up to the caller to provide valid values. + */ + +/***************************************************************************/ + +/** + ***************************************************************************** + * @ingroup LacCipher + * Cipher session setup data check + * + * @description + * This function will check any algorithm-specific fields + * in the session cipher setup data structure + * + * @param[in] pCipherSetupData Pointer to session cipher context + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter. 
+ * + *****************************************************************************/ +CpaStatus LacCipher_SessionSetupDataCheck( + const CpaCySymCipherSetupData *pCipherSetupData); + +/** +******************************************************************************* +* @ingroup LacCipher +* Function that checks the perform common parameters for cipher +* +* @description +* This function checks the perform parameters for cipher operations +* +* @param[in] cipherAlgorithm read only pointer to cipher context structure +* +* @param[in] pOpData read only pointer to user-supplied data for this +* cipher operation +* @param[in] packetLen read only length of data in buffer +* +* @retval CPA_STATUS_SUCCESS Success +* @retval CPA_STATUS_INVALID_PARAM Invalid parameter +* +*****************************************************************************/ +CpaStatus LacCipher_PerformParamCheck(CpaCySymCipherAlgorithm cipherAlgorithm, + const CpaCySymOpData *pOpData, + const Cpa64U packetLen); + +/** + ***************************************************************************** + * @ingroup LacCipher + * Cipher perform IV check + * + * @description + * This function will perform algorithm-specific checks on the + * cipher Initialisation Vector data provided by the user. + * + * @param[in] pCbCookie Pointer to struct containing internal cookie + * data for the operation + * @param[in] qatPacketType QAT partial packet type (start/mid/end/none) + * @param[out] ppIvBuffer Returns a pointer to an IV buffer. + * + * + * @retval CPA_STATUS_SUCCESS Function executed successfully. + * @retval CPA_STATUS_FAIL Function failed. + * @retval CPA_STATUS_INVALID_PARAM Invalid parameter. + * + * @see LacCipher_Perform(), LacCipher_IvBufferRestore() + * + * @note LacCipher_IvBufferRestore() must be called when the request is + * completed to update the users IV buffer, only in the case of partial + * packet requests + * + *****************************************************************************/ +CpaStatus LacCipher_PerformIvCheck(sal_service_t *pService, + lac_sym_bulk_cookie_t *pCbCookie, + Cpa32U qatPacketType, + Cpa8U **ppIvBuffer); + +#endif /* LAC_SYM_CIPHER_H */ diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_cipher_defs.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_cipher_defs.h new file mode 100644 index 000000000000..4eddb70420da --- /dev/null +++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_cipher_defs.h @@ -0,0 +1,182 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + ***************************************************************************** + * @file lac_sym_cipher_defs.h + * + * @ingroup LacCipher + * + * @description + * This file defines constants for the cipher operations. 
+ * + *****************************************************************************/ + +/***************************************************************************/ + +#ifndef LAC_SYM_CIPHER_DEFS_H +#define LAC_SYM_CIPHER_DEFS_H + +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ + +#include "cpa.h" +#include "cpa_cy_sym.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ + +/***************************************************************************/ + +/* + * Constants value for ARC4 algorithm + */ +/* ARC4 algorithm block size */ +#define LAC_CIPHER_ARC4_BLOCK_LEN_BYTES 8 +/* ARC4 key matrix size (bytes) */ +#define LAC_CIPHER_ARC4_KEY_MATRIX_LEN_BYTES 256 +/* ARC4 256 bytes for key matrix, 2 for i and j and 6 bytes for padding */ +#define LAC_CIPHER_ARC4_STATE_LEN_BYTES 264 + +#define LAC_SYM_SNOW3G_CIPHER_CONFIG_FOR_HASH_SZ 40 +/* Snow3g cipher config required for performing a Snow3g hash operation. + * It contains 8 Bytes of config for hardware, 16 Bytes of Key and requires + * 16 Bytes for the IV. + */ + +/* Key Modifier (KM) 4 bytes used in Kasumi algorithm in F8 mode to XOR + * Cipher Key (CK) */ +#define LAC_CIPHER_KASUMI_F8_KEY_MODIFIER_4_BYTES 0x55555555 + +/* The IV length for Kasumi Kgcore is 8 bytes */ +#define LAC_CIPHER_KASUMI_F8_IV_LENGTH 8 + +/* The Counter length for Kasumi Kgcore is 8 bytes */ +#define LAC_CIPHER_KASUMI_F8_COUNTER_LENGTH 8 + +/* The IV length for AES F8 is 16 bytes */ +#define LAC_CIPHER_AES_F8_IV_LENGTH 16 + +/* For Snow3G UEA2, need to make sure last 8 Bytes of IV buffer are + * zero. 
*/ +#define LAC_CIPHER_SNOW3G_UEA2_IV_BUFFER_ZERO_LENGTH 8 + +/* Reserve enough space for max length cipher state + * (can be IV , counter or ARC4 state) */ +#define LAC_CIPHER_STATE_SIZE_MAX LAC_CIPHER_ARC4_STATE_LEN_BYTES + +/* Reserve enough space for max length cipher IV + * (can be A value for Kasumi(passed in as IV), IV or counter, + * but not ARC4 state) */ +#define LAC_CIPHER_IV_SIZE_MAX ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ + +/* 96-bit case of IV for GCM algorithm */ +#define LAC_CIPHER_IV_SIZE_GCM_12 12 + +/* 96-bit case of IV for CCP/GCM single pass algorithm */ +#define LAC_CIPHER_SPC_IV_SIZE 12 +/* + * Constants value for NULL algorithm + */ +/* NULL algorithm block size */ +#define LAC_CIPHER_NULL_BLOCK_LEN_BYTES 8 + +/* Macro to check if the Algorithm is SM4 */ +#define LAC_CIPHER_IS_SM4(algo) \ + ((algo == CPA_CY_SYM_CIPHER_SM4_ECB) || \ + (algo == CPA_CY_SYM_CIPHER_SM4_CBC) || \ + (algo == CPA_CY_SYM_CIPHER_SM4_CTR)) + +/* Macro to check if the Algorithm is CHACHA */ +#define LAC_CIPHER_IS_CHACHA(algo) (algo == CPA_CY_SYM_CIPHER_CHACHA) +/* Macro to check if the Algorithm is AES */ +#define LAC_CIPHER_IS_AES(algo) \ + ((algo == CPA_CY_SYM_CIPHER_AES_ECB) || \ + (algo == CPA_CY_SYM_CIPHER_AES_CBC) || \ + (algo == CPA_CY_SYM_CIPHER_AES_CTR) || \ + (algo == CPA_CY_SYM_CIPHER_AES_CCM) || \ + (algo == CPA_CY_SYM_CIPHER_AES_GCM) || \ + (algo == CPA_CY_SYM_CIPHER_AES_XTS)) + +/* Macro to check if the Algorithm is DES */ +#define LAC_CIPHER_IS_DES(algo) \ + ((algo == CPA_CY_SYM_CIPHER_DES_ECB) || \ + (algo == CPA_CY_SYM_CIPHER_DES_CBC)) + +/* Macro to check if the Algorithm is Triple DES */ +#define LAC_CIPHER_IS_TRIPLE_DES(algo) \ + ((algo == CPA_CY_SYM_CIPHER_3DES_ECB) || \ + (algo == CPA_CY_SYM_CIPHER_3DES_CBC) || \ + (algo == CPA_CY_SYM_CIPHER_3DES_CTR)) + +/* Macro to check if the Algorithm is Kasumi */ +#define LAC_CIPHER_IS_KASUMI(algo) (algo == CPA_CY_SYM_CIPHER_KASUMI_F8) + +/* Macro to check if the Algorithm is Snow3G UEA2 */ +#define LAC_CIPHER_IS_SNOW3G_UEA2(algo) (algo == CPA_CY_SYM_CIPHER_SNOW3G_UEA2) + +/* Macro to check if the Algorithm is ARC4 */ +#define LAC_CIPHER_IS_ARC4(algo) (algo == CPA_CY_SYM_CIPHER_ARC4) + +/* Macro to check if the Algorithm is ZUC EEA3 */ +#define LAC_CIPHER_IS_ZUC_EEA3(algo) (algo == CPA_CY_SYM_CIPHER_ZUC_EEA3) + +/* Macro to check if the Algorithm is NULL */ +#define LAC_CIPHER_IS_NULL(algo) (algo == CPA_CY_SYM_CIPHER_NULL) + +/* Macro to check if the Mode is CTR */ +#define LAC_CIPHER_IS_CTR_MODE(algo) \ + ((algo == CPA_CY_SYM_CIPHER_AES_CTR) || \ + (algo == CPA_CY_SYM_CIPHER_3DES_CTR) || (LAC_CIPHER_IS_CCM(algo)) || \ + (LAC_CIPHER_IS_GCM(algo)) || (LAC_CIPHER_IS_CHACHA(algo)) || \ + (algo == CPA_CY_SYM_CIPHER_SM4_CTR)) + +/* Macro to check if the Algorithm is ECB */ +#define LAC_CIPHER_IS_ECB_MODE(algo) \ + ((algo == CPA_CY_SYM_CIPHER_AES_ECB) || \ + (algo == CPA_CY_SYM_CIPHER_DES_ECB) || \ + (algo == CPA_CY_SYM_CIPHER_3DES_ECB) || \ + (algo == CPA_CY_SYM_CIPHER_NULL) || \ + (algo == CPA_CY_SYM_CIPHER_SNOW3G_UEA2) || \ + (algo == CPA_CY_SYM_CIPHER_SM4_ECB)) + +/* Macro to check if the Algorithm Mode is F8 */ +#define LAC_CIPHER_IS_F8_MODE(algo) \ + ((algo == CPA_CY_SYM_CIPHER_KASUMI_F8) || \ + (algo == CPA_CY_SYM_CIPHER_AES_F8)) + +/* Macro to check if the Algorithm is CBC */ +#define LAC_CIPHER_IS_CBC_MODE(algo) \ + ((algo == CPA_CY_SYM_CIPHER_AES_CBC) || \ + (algo == CPA_CY_SYM_CIPHER_DES_CBC) || \ + (algo == CPA_CY_SYM_CIPHER_3DES_CBC) || \ + (algo == CPA_CY_SYM_CIPHER_SM4_CBC)) + +/* Macro to check if the Algorithm is CCM */ +#define 
LAC_CIPHER_IS_CCM(algo) (algo == CPA_CY_SYM_CIPHER_AES_CCM) + +/* Macro to check if the Algorithm is GCM */ +#define LAC_CIPHER_IS_GCM(algo) (algo == CPA_CY_SYM_CIPHER_AES_GCM) + +/* Macro to check if the Algorithm is AES-F8 */ +#define LAC_CIPHER_IS_AES_F8(algo) (algo == CPA_CY_SYM_CIPHER_AES_F8) + +/* Macro to check if the Algorithm Mode is XTS */ +#define LAC_CIPHER_IS_XTS_MODE(algo) (algo == CPA_CY_SYM_CIPHER_AES_XTS) + +/* Macro to check if the Algorithm is single pass */ +#define LAC_CIPHER_IS_SPC(cipher, hash, mask) \ + ((LAC_CIPHER_IS_CHACHA(cipher) && (CPA_CY_SYM_HASH_POLY == hash) && \ + ((mask)&ICP_ACCEL_CAPABILITIES_CHACHA_POLY)) || \ + (LAC_CIPHER_IS_GCM(cipher) && ((CPA_CY_SYM_HASH_AES_GCM == hash) || \ + (CPA_CY_SYM_HASH_AES_GMAC == hash)) && \ + ((mask)&ICP_ACCEL_CAPABILITIES_AESGCM_SPC))) + +#endif /* LAC_CIPHER_DEFS_H */ diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_hash.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_hash.h new file mode 100644 index 000000000000..b2cd7bcd0b8c --- /dev/null +++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_hash.h @@ -0,0 +1,559 @@ +/*************************************************************************** + * + * <COPYRIGHT_TAG> + * + ***************************************************************************/ + +/** + ***************************************************************************** + * @file lac_sym_hash.h + * + * @defgroup LacHash Hash + * + * @ingroup LacSym + * + * API functions of the Hash component + * + * @lld_start + * @lld_overview + * There is a single \ref cpaCySym "Symmetric LAC API" for hash, cipher, + * auth encryption and algorithm chaining. This API is implemented by the + * \ref LacSym "Symmetric" module. It demultiplexes calls to this API into + * their basic operation and does some common parameter checking and deals + * with accesses to the session table. + * + * The hash component supports hashing in 3 modes. PLAIN, AUTH and NESTED. + * Plain mode is used to provide data integrity while auth mode is used to + * provide integrity as well as its authenticity. Nested mode is inteded + * for use by non standard HMAC like algorithms such as for the SSL master + * key secret. Partial packets is supported for both plain and auth modes. + * In-place and out-of-place processing is supported for all modes. The + * verify operation is supported for PLAIN and AUTH modes only. + * + * The hash component is responsible for implementing the hash specific + * functionality for initialising a session and for a perform operation. + * Statistics are maintained in the symmetric \ref CpaCySymStats64 "stats" + * structure. This module has been seperated out into two. The hash QAT module + * deals entirely with QAT data structures. The hash module itself has minimal + * exposure to the QAT data structures. + * + * @lld_dependencies + * - \ref LacCommon + * - \ref LacSymQat "Symmetric QAT": Hash uses the lookup table provided by + * this module to validate user input. Hash also uses this module to build + * the hash QAT request message, request param structure, populate the + * content descriptor, allocate and populate the hash state prefix buffer. + * Hash also registers its function to process the QAT response with this + * module. + * - OSAL : For memory functions, atomics and locking + * + * @lld_module_algorithms + * <b>a. 
HMAC Precomputes</b>\n
+ * HMAC algorithm is specified as follows:
+ * \f$ HMAC(msg) = hash((key \oplus opad) \parallel
+ * hash((key \oplus ipad) \parallel msg ))\f$.
+ * The key is fixed per session, and is padded up to the block size of the
+ * algorithm if necessary and xored with the ipad/opad. The following portion
+ * of the operation can be precomputed: \f$ hash(key \oplus ipad) \f$ as the
+ * output of this intermediate hash will be the same for every perform
+ * operation. This intermediate state is the intermediate state of a
+ * partial packet. It is used as the initialiser state to \f$ hash(msg) \f$.
+ * The same applies to \f$ hash(key \oplus opad) \f$. This saves the time
+ * taken in the data path to do two hashes on a block size of data.
+ * Note: a partial packet operation generates an intermediate
+ * state. The final operation on a partial packet or when a full packet is
+ * used applies padding and gives the final hash result. Essentially, for the
+ * inner hash, a partial packet final is issued on the data, using the
+ * precomputed intermediate state, and returns the digest. (An illustrative
+ * sketch of the pad derivation appears further down in this comment.)
+ *
+ * For the HMAC precomputes, \ref LacSymHash_HmacPreCompute(), there are two
+ * hash operations done using an internal content descriptor to configure the
+ * QAT. A first partial packet is specified as the packet type for the
+ * pre-computes as we need the state that uses the initialiser constants
+ * specific to the algorithm. The resulting output is copied from the hash
+ * state prefix buffer into the QAT content descriptor for the session being
+ * initialised. The state is used in each perform operation as the
+ * initialiser to the algorithm.
+ *
+ * <b>b. AES XCBC Precomputes</b>\n
+ * A similar technique to HMAC is used to generate the precomputes for
+ * AES XCBC. In this case a cipher operation is used to generate the
+ * precomputed result. The pre-compute operation involves deriving 3 128-bit
+ * keys (K1, K2 and K3) from the 128-bit secret key K.
+ *
+ * - K1 = 0x01010101010101010101010101010101 encrypted with Key K
+ * - K2 = 0x02020202020202020202020202020202 encrypted with Key K
+ * - K3 = 0x03030303030303030303030303030303 encrypted with Key K
+ *
+ * A content descriptor is created with the cipher algorithm set to AES
+ * in ECB mode and with the keysize set to 128 bits. The 3 constants, 16 bytes
+ * each, are copied into the src buffer and an in-place cipher operation is
+ * performed on the 48 bytes. ECB mode does not maintain state, therefore
+ * the 3 keys can be encrypted in one perform. The encrypted result is used by
+ * the state2 field in the hash setup block of the content descriptor.
+ *
+ * The precompute operations use a different lac command ID and thus have a
+ * different route in the response path to the symmetric code. In this
+ * precompute callback function the output of the precompute operation is
+ * copied into the content descriptor for the session being registered.
+ *
+ * <b>c. AES CCM Precomputes</b>\n
+ * The precomputes for AES CCM are trivial, i.e. there is no need to perform
+ * a cipher or a digest operation. Instead, the key is stored directly in
+ * the state2 field.
+ *
+ * <b>d. AES GCM Precomputes</b>\n
+ * As with AES XCBC precomputes, a cipher operation is used to generate
+ * the precomputed result for AES GCM. In this case the Galois Hash
+ * Multiplier (H) must be derived and stored in the state2 field.
H is + * derived by encrypting a 16-byte block of zeroes with the + * cipher/authentication key, using AES in ECB mode. + * + * <b>Key size for Auth algorithms</b>\n + * <i>Min Size</i>\n + * RFC 2104 states "The key for HMAC can be of any length. However, less than + * L bytes is strongly discouraged as it would decrease the security strength + * of the function." + * + * FIPS 198a states "The size of the key, K, shall be equal to or greater than + * L/2, where L is the size of the hash function output." + * + * RFC 4434 states "If the key has fewer than 128 bits, lengthen it to exactly + * 128 bits by padding it on the right with zero bits. + * + * A key length of 0 upwards is accepted. It is up to the client to pass in a + * key that complies with the standard they wish to support. + * + * <i>Max Size</i>\n + * RFC 2104 section 2 states : "Applications that use keys longer than B bytes + * will first hash the key using H and then use the resultant L byte string + * as the actual key to HMAC + * + * RFC 4434 section 2 states: + * "If the key is 129 bits or longer, shorten it to exactly 128 bits + * by performing the steps in AES-XCBC-PRF-128 (that is, the + * algorithm described in this document). In that re-application of + * this algorithm, the key is 128 zero bits; the message is the + * too-long current key." + * + * We push this up to the client. They need to do the hash operation through + * the LAC API if the key is greater than the block size of the algorithm. This + * will reduce the key size to the digest size of the algorithm. + * + * RFC 3566 section 4 states: + * AES-XCBC-MAC-96 is a secret key algorithm. For use with either ESP or + * AH a fixed key length of 128-bits MUST be supported. Key lengths + * other than 128-bits MUST NOT be supported (i.e., only 128-bit keys are + * to be used by AES-XCBC-MAC-96). + * + * In this case it is up to the client to provide a key that complies with + * the standards. i.e. exactly 128 bits in size. + * + * + * <b>HMAC-MD5-96 and HMAC-SHA1-96</b>\n + * HMAC-MD5-96 and HMAC-SHA1-96 are defined as requirements by Look Aside + * IPsec. The differences between HMAC-SHA1 and HMAC-SHA1-96 are that the + * digest produced is truncated and there are strict requirements on the + * size of the key that is used. + * + * They are supported in LAC by HMAC-MD5 and HMAC-SHA1. The field + * \ref CpaCySymHashSetupData::digestResultLenInBytes in the LAC API in + * bytes needs to be set to 12 bytes. There are also requirements regarding + * the keysize. It is up to the client to ensure the key size meets the + * requirements of the standards they are using. + * + * RFC 2403: HMAC-MD5-96 Key lengths other than 128-bits MUST NOT be supported. + * HMAC-MD5-96 produces a 128-bit authenticator value. For use with either + * ESP or AH, a truncated value using the first 96 bits MUST be supported. + * + * RFC2404: HMAC-SHA1-96 Key lengths other than 160- bits MUST NOT be supported + * HMAC-SHA-1-96 produces a 160-bit authenticator value. For use with either + * ESP or AH, a truncated value using the first 96 bits MUST be supported. + * + * <b>Out of place operations</b> + * When verify is disabled, the digest will be written to the destination + * buffer. When verify is enabled, the digest calculated is compared to the + * digest stored in the source buffer. + * + * <b>Partial Packets</b> + * Partial packets are handled in the \ref LacSym "Symmetric" component for + * the request. The hash callback function handles the update of the state + * in the callback. 
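+ *
+ * <b>Illustrative sketch of the HMAC pad derivation</b>\n
+ * The fragment below is illustrative only and is not part of the driver;
+ * the buffer names are hypothetical. It shows the key-XOR step described
+ * in section a. above for a key already zero-padded to the algorithm block
+ * size (the pad bytes correspond to LAC_HASH_IPAD_BYTE and
+ * LAC_HASH_OPAD_BYTE in lac_sym_hash_defs.h):
+ * \code
+ * for (i = 0; i < blockSizeInBytes; i++) {
+ *         ipadBlock[i] = paddedKey[i] ^ 0x36;  // LAC_HASH_IPAD_BYTE
+ *         opadBlock[i] = paddedKey[i] ^ 0x5c;  // LAC_HASH_OPAD_BYTE
+ * }
+ * // state1 is the intermediate hash state after hashing ipadBlock,
+ * // state2 the intermediate state after hashing opadBlock.
+ * \endcode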
+ * + * + * @lld_process_context + * + * Session Register Sequence Diagram: For hash modes plain and nested. + * \msc + * APP [label="Application"], SYM [label="Symmetric LAC"], + * Achain [label="Alg chain"], Hash, SQAT [label="Symmetric QAT"]; + * + * APP=>SYM [ label = "cpaCySymInitSession(cbFunc)", + * URL="\ref cpaCySymInitSession()"] ; + * SYM=>SYM [ label = "LacSymSession_ParamCheck()", + * URL="\ref LacSymSession_ParamCheck()"]; + * SYM=>Achain [ label = "LacAlgChain_SessionInit()", + * URL="\ref LacAlgChain_SessionInit()"]; + * Achain=>Hash [ label = "LacHash_HashContextCheck()", + * URL="\ref LacHash_HashContextCheck()"]; + * Achain<<Hash [ label="return"]; + * Achain=>SQAT [ label = "LacSymQat_HashContentDescInit()", + * URL="\ref LacSymQat_HashContentDescInit()"]; + * Achain<<SQAT [ label="return"]; + * Achain=>Hash [ label = "LacHash_StatePrefixAadBufferInit()", + * URL="\ref LacHash_StatePrefixAadBufferInit()"]; + * Hash=>SQAT [ label = "LacSymQat_HashStatePrefixAadBufferSizeGet()", + * URL="\ref LacSymQat_HashStatePrefixAadBufferSizeGet()"]; + * Hash<<SQAT [ label="return"]; + * Hash=>SQAT [ label = "LacSymQat_HashStatePrefixAadBufferPopulate()", + * URL="\ref LacSymQat_HashStatePrefixAadBufferPopulate()"]; + * Hash<<SQAT [ label="return"]; + * Achain<<Hash [ label="return"]; + * SYM<<Achain [ label = "status" ]; + * SYM=>SYM [label = "LAC_SYM_STAT_INC", URL="\ref LAC_SYM_STAT_INC"]; + * APP<<SYM [label = "status"]; + * \endmsc + * + * Perform Sequence Diagram: For all 3 modes, full packets and in-place. + * \msc + * APP [label="Application"], SYM [label="Symmetric LAC"], + * Achain [label="Alg chain"], Hash, SQAT [label="Symmetric QAT"], + * QATCOMMS [label="QAT Comms"]; + * + * APP=>SYM [ label = "cpaCySymPerformOp()", + * URL="\ref cpaCySymPerformOp()"] ; + * SYM=>SYM [ label = "LacSymPerform_BufferParamCheck()", + * URL="\ref LacSymPerform_BufferParamCheck()"]; + * SYM=>Achain [ label = "LacAlgChain_Perform()", + * URL="\ref LacAlgChain_Perform()"]; + * Achain=>Achain [ label = "Lac_MemPoolEntryAlloc()", + * URL="\ref Lac_MemPoolEntryAlloc()"]; + * Achain=>SQAT [ label = "LacSymQat_packetTypeGet()", + * URL="\ref LacSymQat_packetTypeGet()"]; + * Achain<<SQAT [ label="return"]; + * Achain=>Achain [ label = "LacBuffDesc_BufferListTotalSizeGet()", + * URL="\ref LacBuffDesc_BufferListTotalSizeGet()"]; + * Achain=>Hash [ label = "LacHash_PerformParamCheck()", + * URL = "\ref LacHash_PerformParamCheck()"]; + * Achain<<Hash [ label="status"]; + * Achain=>SQAT [ label = "LacSymQat_HashRequestParamsPopulate()", + * URL="\ref LacSymQat_HashRequestParamsPopulate()"]; + * Achain<<SQAT [ label="return"]; + * Achain<<SQAT [ label="cmdFlags"]; + * + * Achain=>Achain [ label = "LacBuffDesc_BufferListDescWrite()", + * URL="\ref LacBuffDesc_BufferListDescWrite()"]; + * Achain=>SQAT [ label = "SalQatMsg_CmnMsgAndReqParamsPopulate()", + * URL="\ref SalQatMsg_CmnMsgAndReqParamsPopulate()"]; + * Achain<<SQAT [ label="return"]; + * Achain=>SYM [ label = "LacSymQueue_RequestSend()", + * URL="\ref LacSymQueue_RequestSend()"]; + * SYM=>QATCOMMS [ label = "QatComms_MsgSend()", + * URL="\ref QatComms_MsgSend()"]; + * SYM<<QATCOMMS [ label="status"]; + * Achain<<SYM [ label="status"]; + * SYM<<Achain [ label="status"]; + * SYM=>SYM [label = "LAC_SYM_STAT_INC", URL="\ref LAC_SYM_STAT_INC"]; + * APP<<SYM [label = "status"]; + * ... [label = "QAT processing the request and generates response. 
+ * Callback in Bottom Half Context"]; + * ...; + * QATCOMMS=>QATCOMMS [label ="QatComms_ResponseMsgHandler()", + * URL="\ref QatComms_ResponseMsgHandler()"]; + * QATCOMMS=>SQAT [label ="LacSymQat_SymRespHandler()", + * URL="\ref LacSymQat_SymRespHandler()"]; + * SQAT=>SYM [label="LacSymCb_ProcessCallback()", + * URL="\ref LacSymCb_ProcessCallback()"]; + * SYM=>SYM [label = "LacSymCb_ProcessCallbackInternal()", + * URL="\ref LacSymCb_ProcessCallbackInternal()"]; + * SYM=>SYM [label = "Lac_MemPoolEntryFree()", + * URL="\ref Lac_MemPoolEntryFree()"]; + * SYM=>SYM [label = "LAC_SYM_STAT_INC", URL="\ref LAC_SYM_STAT_INC"]; + * SYM=>APP [label="cbFunc"]; + * APP>>SYM [label="return"]; + * SYM>>SQAT [label="return"]; + * SQAT>>QATCOMMS [label="return"]; + * \endmsc + * + * @lld_end + * + *****************************************************************************/ + +/*****************************************************************************/ + +#ifndef LAC_SYM_HASH_H +#define LAC_SYM_HASH_H + +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ + +#include "cpa.h" +#include "cpa_cy_sym.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ + +#include "lac_session.h" +#include "lac_buffer_desc.h" + +/** + ***************************************************************************** + * @ingroup LacHash + * Definition of callback function. + * + * @description + * This is the callback function prototype. The callback function is + * invoked when a hash precompute operation completes. + * + * @param[in] pCallbackTag Opaque value provided by user while making + * individual function call. + * + * @retval + * None + *****************************************************************************/ +typedef void (*lac_hash_precompute_done_cb_t)(void *pCallbackTag); + +/* + * WARNING: There are no checks done on the parameters of the functions in + * this file. The expected values of the parameters are documented and it is + * up to the caller to provide valid values. + */ + +/** +******************************************************************************* +* @ingroup LacHash +* validate the hash context +* +* @description +* The client populates the hash context in the session context structure +* This is passed as parameter to the session register API function and +* needs to be validated. +* +* @param[in] pHashSetupData pointer to hash context structure +* +* @retval CPA_STATUS_SUCCESS Success +* @retval CPA_STATUS_INVALID_PARAM Invalid parameter +* +*****************************************************************************/ +CpaStatus LacHash_HashContextCheck(CpaInstanceHandle instanceHandle, + const CpaCySymHashSetupData *pHashSetupData); + +/** + ****************************************************************************** + * @ingroup LacHash + * Populate the hash pre-compute data. + * + * @description + * This function populates the state1 and state2 fields with the hash + * pre-computes. This is only done for authentication. The state1 + * and state2 pointers must be set to point to the correct locations + * in the content descriptor where the precompute result(s) will be + * written, before this function is called. 
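+ *
+ *      A minimal usage sketch (illustrative only: the variable names and the
+ *      way the state pointers are derived are hypothetical; the prototype is
+ *      as declared below):
+ * \code
+ *      // hypothetical offsets into the hash setup block of the content
+ *      // descriptor for the session being initialised
+ *      Cpa8U *pState1 = pHashSetupBlock + state1Offset;
+ *      Cpa8U *pState2 = pHashSetupBlock + state2Offset;
+ *      status = LacHash_PrecomputeDataCreate(instanceHandle, pSessionSetup,
+ *                                            precompDoneCb, pCookie,
+ *                                            pWorkingBuffer, pState1, pState2);
+ * \endcode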
+ * + * @param[in] instanceHandle Instance Handle + * @param[in] pSessionSetup pointer to session setup data + * @param[in] callbackFn Callback function which is invoked when + * the precompute operation is completed + * @param[in] pCallbackTag Opaque data which is passed back to the user + * as a parameter in the callback function + * @param[out] pWorkingBuffer Pointer to working buffer, sufficient memory + * must be allocated by the caller for this. + * Assumption that this is 8 byte aligned. + * @param[out] pState1 pointer to State 1 in content descriptor + * @param[out] pState2 pointer to State 2 in content descriptor + * + * @retval CPA_STATUS_SUCCESS Success + * @retval CPA_STATUS_RETRY Retry the operation. + * @retval CPA_STATUS_RESOURCE Error Allocating memory + * @retval CPA_STATUS_FAIL Operation Failed + * + *****************************************************************************/ +CpaStatus LacHash_PrecomputeDataCreate(const CpaInstanceHandle instanceHandle, + CpaCySymSessionSetupData *pSessionSetup, + lac_hash_precompute_done_cb_t callbackFn, + void *pCallbackTag, + Cpa8U *pWorkingBuffer, + Cpa8U *pState1, + Cpa8U *pState2); + +/** + ****************************************************************************** + * @ingroup LacHash + * populate the hash state prefix aad buffer. + * + * @description + * This function populates the hash state prefix aad buffer. This function + * is not called for CCM/GCM operations as the AAD data varies per request + * and is stored in the cookie as opposed to the session descriptor. + * + * @param[in] pHashSetupData pointer to hash setup structure + * @param[in] pHashControlBlock pointer to hash control block + * @param[in] qatHashMode QAT Mode for hash + * @param[in] pHashStateBuffer pointer to hash state prefix aad buffer + * @param[in] pHashStateBufferInfo Pointer to hash state prefix buffer info + * + * @retval CPA_STATUS_SUCCESS Success + * @retval CPA_STATUS_FAIL Operation Failed + * + *****************************************************************************/ +CpaStatus LacHash_StatePrefixAadBufferInit( + sal_service_t *pService, + const CpaCySymHashSetupData *pHashSetupData, + icp_qat_la_bulk_req_ftr_t *pHashControlBlock, + icp_qat_hw_auth_mode_t qatHashMode, + Cpa8U *pHashStateBuffer, + lac_sym_qat_hash_state_buffer_info_t *pHashStateBufferInfo); + +/** +******************************************************************************* +* @ingroup LacHash +* Check parameters for a hash perform operation +* +* @description +* This function checks the parameters for a hash perform operation. +* +* @param[in] pSessionDesc Pointer to session descriptor. +* @param[in] pOpData Pointer to request parameters. +* @param[in] srcPktSize Total size of the Buffer List +* @param[in] pVerifyResult Pointer to user flag +* +* @retval CPA_STATUS_SUCCESS Success +* @retval CPA_STATUS_INVALID_PARAM Invalid Parameter +* +*****************************************************************************/ +CpaStatus LacHash_PerformParamCheck(CpaInstanceHandle instanceHandle, + lac_session_desc_t *pSessionDesc, + const CpaCySymOpData *pOpData, + Cpa64U srcPktSize, + const CpaBoolean *pVerifyResult); + +/** +******************************************************************************* +* @ingroup LacHash +* Perform hash precompute operation for HMAC +* +* @description +* This function sends 2 requests to the CPM for the hmac precompute +* operations. The results of the ipad and opad state calculation +* is copied into pState1 and pState2 (e.g. 
these may be the state1 and +* state2 buffers in a hash content desciptor) and when +* the final operation has completed the condition passed as a param to +* this function is set to true. +* +* This function performs the XORing of the IPAD and OPAD constants to +* the key (which was padded to the block size of the algorithm) +* +* @param[in] instanceHandle Instance Handle +* @param[in] hashAlgorithm Hash Algorithm +* @param[in] authKeyLenInBytes Length of Auth Key +* @param[in] pAuthKey Pointer to Auth Key +* @param[out] pWorkingMemory Pointer to working memory that is carved +* up and used in the pre-compute operations. +* Assumption that this is 8 byte aligned. +* @param[out] pState1 Pointer to State 1 in content descriptor +* @param[out] pState2 Pointer to State 2 in content descriptor +* @param[in] callbackFn Callback function which is invoked when +* the precompute operation is completed +* @param[in] pCallbackTag Opaque data which is passed back to the user +* as a parameter in the callback function +* +* @retval CPA_STATUS_SUCCESS Success +* @retval CPA_STATUS_RETRY Retry the operation. +* @retval CPA_STATUS_FAIL Operation Failed +* +*****************************************************************************/ +CpaStatus LacSymHash_HmacPreComputes(CpaInstanceHandle instanceHandle, + CpaCySymHashAlgorithm hashAlgorithm, + Cpa32U authKeyLenInBytes, + Cpa8U *pAuthKey, + Cpa8U *pWorkingMemory, + Cpa8U *pState1, + Cpa8U *pState2, + lac_hash_precompute_done_cb_t callbackFn, + void *pCallbackTag); + +/** +******************************************************************************* + * @ingroup LacHash + * Perform hash precompute operation for XCBC MAC and GCM + * + * @description + * This function sends 1 request to the CPM for the precompute operation + * based on an AES ECB cipher. The results of the calculation is copied + * into pState (this may be a pointer to the State 2 buffer in a Hash + * content descriptor) and when the operation has completed the condition + * passed as a param to this function is set to true. + * + * @param[in] instanceHandle Instance Handle + * @param[in] hashAlgorithm Hash Algorithm + * @param[in] authKeyLenInBytes Length of Auth Key + * @param[in] pAuthKey Auth Key + * @param[out] pWorkingMemory Pointer to working memory that is carved + * up and used in the pre-compute operations. + * Assumption that this is 8 byte aligned. + * @param[out] pState Pointer to output state + * @param[in] callbackFn Callback function which is invoked when + * the precompute operation is completed + * @param[in] pCallbackTag Opaque data which is passed back to the user + * as a parameter in the callback function + + * + * @retval CPA_STATUS_SUCCESS Success + * @retval CPA_STATUS_RETRY Retry the operation. + * @retval CPA_STATUS_FAIL Operation Failed + * + *****************************************************************************/ +CpaStatus LacSymHash_AesECBPreCompute(CpaInstanceHandle instanceHandle, + CpaCySymHashAlgorithm hashAlgorithm, + Cpa32U authKeyLenInBytes, + Cpa8U *pAuthKey, + Cpa8U *pWorkingMemory, + Cpa8U *pState, + lac_hash_precompute_done_cb_t callbackFn, + void *pCallbackTag); + +/** +******************************************************************************* +* @ingroup LacHash +* initialise data structures for the hash precompute operations +* +* @description +* This function registers the precompute callback handler function, which +* is different to the default one used by symmetric. 
Content desciptors +* are preallocted for the hmac precomputes as they are constant for these +* operations. +* +* @retval CPA_STATUS_SUCCESS Success +* @retval CPA_STATUS_RESOURCE Error allocating memory +* +*****************************************************************************/ +CpaStatus LacSymHash_HmacPrecompInit(CpaInstanceHandle instanceHandle); + +/** +******************************************************************************* +* @ingroup LacHash +* free resources allocated for the precompute operations +* +* @description +* free up the memory allocated on init time for the content descriptors +* that were allocated for the HMAC precompute operations. +* +* @return none +* +*****************************************************************************/ +void LacSymHash_HmacPrecompShutdown(CpaInstanceHandle instanceHandle); + +void LacSync_GenBufListVerifyCb(void *pCallbackTag, + CpaStatus status, + CpaCySymOp operationType, + void *pOpData, + CpaBufferList *pDstBuffer, + CpaBoolean opResult); + +#endif /* LAC_SYM_HASH_H */ diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_hash_defs.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_hash_defs.h new file mode 100644 index 000000000000..e95b0efb5b0e --- /dev/null +++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_hash_defs.h @@ -0,0 +1,344 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_hash_defs.h + * + * @defgroup LacHashDefs Hash Definitions + * + * @ingroup LacHash + * + * Constants for hash algorithms + * + ***************************************************************************/ + +#ifndef LAC_SYM_HASH_DEFS_H +#define LAC_SYM_HASH_DEFS_H + +/* Constant for MD5 algorithm */ +#define LAC_HASH_MD5_BLOCK_SIZE 64 +/**< @ingroup LacHashDefs + * MD5 block size in bytes */ +#define LAC_HASH_MD5_DIGEST_SIZE 16 +/**< @ingroup LacHashDefs + * MD5 digest length in bytes */ +#define LAC_HASH_MD5_STATE_SIZE 16 +/**< @ingroup LacHashDefs + * MD5 state size */ + +/* Constants for SHA1 algorithm */ +#define LAC_HASH_SHA1_BLOCK_SIZE 64 +/**< @ingroup LacHashDefs + * SHA1 Block size in bytes */ +#define LAC_HASH_SHA1_DIGEST_SIZE 20 +/**< @ingroup LacHashDefs + * SHA1 digest length in bytes */ +#define LAC_HASH_SHA1_STATE_SIZE 20 +/**< @ingroup LacHashDefs + * SHA1 state size */ + +/* Constants for SHA224 algorithm */ +#define LAC_HASH_SHA224_BLOCK_SIZE 64 +/**< @ingroup LacHashDefs + * SHA224 block size in bytes */ +#define LAC_HASH_SHA224_DIGEST_SIZE 28 +/**< @ingroup LacHashDefs + * SHA224 digest length in bytes */ +#define LAC_HASH_SHA224_STATE_SIZE 32 +/**< @ingroup LacHashDefs + * SHA224 state size */ + +/* Constants for SHA256 algorithm */ +#define LAC_HASH_SHA256_BLOCK_SIZE 64 +/**< @ingroup LacHashDefs + * SHA256 block size in bytes */ +#define LAC_HASH_SHA256_DIGEST_SIZE 32 +/**< @ingroup LacHashDefs + * SHA256 digest length */ +#define LAC_HASH_SHA256_STATE_SIZE 32 +/**< @ingroup LacHashDefs + * SHA256 state size */ + +/* Constants for SHA384 algorithm */ +#define LAC_HASH_SHA384_BLOCK_SIZE 128 +/**< @ingroup LacHashDefs + * SHA384 block size in bytes */ +#define LAC_HASH_SHA384_DIGEST_SIZE 48 +/**< @ingroup LacHashDefs + * SHA384 digest length in bytes */ +#define LAC_HASH_SHA384_STATE_SIZE 64 +/**< @ingroup LacHashDefs + * SHA384 state size */ + +/* Constants for SHA512 algorithm */ +#define LAC_HASH_SHA512_BLOCK_SIZE 128 
+/**< @ingroup LacHashDefs + * SHA512 block size in bytes */ +#define LAC_HASH_SHA512_DIGEST_SIZE 64 +/**< @ingroup LacHashDefs + * SHA512 digest length in bytes */ +#define LAC_HASH_SHA512_STATE_SIZE 64 +/**< @ingroup LacHashDefs + * SHA512 state size */ + +/* Constants for SHA3_224 algorithm */ +#define LAC_HASH_SHA3_224_BLOCK_SIZE 144 +/**< @ingroup LacHashDefs + * SHA3_224 block size in bytes */ +#define LAC_HASH_SHA3_224_DIGEST_SIZE 28 +/**< @ingroup LacHashDefs + * SHA3_224 digest length in bytes */ +#define LAC_HASH_SHA3_224_STATE_SIZE 28 +/**< @ingroup LacHashDefs + * SHA3_224 state size */ + +/* Constants for SHA3_256 algorithm */ +#define LAC_HASH_SHA3_256_BLOCK_SIZE 136 +/**< @ingroup LacHashDefs + * SHA3_256 block size in bytes */ +#define LAC_HASH_SHA3_256_DIGEST_SIZE 32 +/**< @ingroup LacHashDefs + * SHA3_256 digest length in bytes */ +#define LAC_HASH_SHA3_256_STATE_SIZE 32 +/**< @ingroup LacHashDefs + * SHA3_256 state size */ + +/* Constants for SHA3_384 algorithm */ +#define LAC_HASH_SHA3_384_BLOCK_SIZE 104 +/**< @ingroup LacHashDefs + * * SHA3_384 block size in bytes */ +#define LAC_HASH_SHA3_384_DIGEST_SIZE 48 +/**< @ingroup LacHashDefs + * * SHA3_384 digest length in bytes */ +#define LAC_HASH_SHA3_384_STATE_SIZE 48 +/**< @ingroup LacHashDefs + * * SHA3_384 state size */ + +/* Constants for SHA3_512 algorithm */ +#define LAC_HASH_SHA3_512_BLOCK_SIZE 72 +/**< @ingroup LacHashDefs + * * * SHA3_512 block size in bytes */ +#define LAC_HASH_SHA3_512_DIGEST_SIZE 64 +/**< @ingroup LacHashDefs + * * * SHA3_512 digest length in bytes */ +#define LAC_HASH_SHA3_512_STATE_SIZE 64 +/**< @ingroup LacHashDefs + * * * SHA3_512 state size */ + +/* Constants for SHAKE_128 algorithm */ +#define LAC_HASH_SHAKE_128_BLOCK_SIZE 168 +/**< @ingroup LacHashDefs + * * * SHAKE_128 block size in bytes */ +#define LAC_HASH_SHAKE_128_DIGEST_SIZE 0xFFFFFFFF +/**< @ingroup LacHashDefs + * * * SHAKE_128 digest length in bytes ((2^32)-1)*/ + +/* Constants for SHAKE_256 algorithm */ +#define LAC_HASH_SHAKE_256_BLOCK_SIZE 136 +/**< @ingroup LacHashDefs + * * * SHAKE_256 block size in bytes */ +#define LAC_HASH_SHAKE_256_DIGEST_SIZE 0xFFFFFFFF +/**< @ingroup LacHashDefs + * * * SHAKE_256 digest length in bytes ((2^ 32)-1)*/ + +/* Constants for POLY algorithm */ +#define LAC_HASH_POLY_BLOCK_SIZE 64 +/**< @ingroup LacHashDefs + * POLY block size in bytes */ +#define LAC_HASH_POLY_DIGEST_SIZE 16 +/**< @ingroup LacHashDefs + * POLY digest length */ +#define LAC_HASH_POLY_STATE_SIZE 0 +/**< @ingroup LacHashDefs + * POLY state size */ + +/* Constants for SM3 algorithm */ +#define LAC_HASH_SM3_BLOCK_SIZE 64 +/**< @ingroup LacHashDefs + * SM3 block size in bytes */ +#define LAC_HASH_SM3_DIGEST_SIZE 32 +/**< @ingroup LacHashDefs + * SM3 digest length */ +#define LAC_HASH_SM3_STATE_SIZE 32 +/**< @ingroup LacHashDefs + * SM3 state size */ + +/* Constants for XCBC precompute algorithm */ +#define LAC_HASH_XCBC_PRECOMP_KEY_NUM 3 +/**< @ingroup LacHashDefs + * The Pre-compute operation involves deriving 3 128-bit + * keys (K1, K2 and K3) */ + +/* Constants for XCBC MAC algorithm */ +#define LAC_HASH_XCBC_MAC_BLOCK_SIZE 16 +/**< @ingroup LacHashDefs + * XCBC_MAC block size in bytes */ +#define LAC_HASH_XCBC_MAC_128_DIGEST_SIZE 16 +/**< @ingroup LacHashDefs + * XCBC_MAC_PRF_128 digest length in bytes */ + +/* Constants for AES CMAC algorithm */ +#define LAC_HASH_CMAC_BLOCK_SIZE 16 +/**< @ingroup LacHashDefs + * AES CMAC block size in bytes */ +#define LAC_HASH_CMAC_128_DIGEST_SIZE 16 +/**< @ingroup LacHashDefs + * AES CMAC 
digest length in bytes */ + +/* constants for AES CCM */ +#define LAC_HASH_AES_CCM_BLOCK_SIZE 16 +/**< @ingroup LacHashDefs + * block size for CBC-MAC part of CCM */ +#define LAC_HASH_AES_CCM_DIGEST_SIZE 16 +/**< @ingroup LacHashDefs + * untruncated size of authentication field */ + +/* constants for AES GCM */ +#define LAC_HASH_AES_GCM_BLOCK_SIZE 16 +/**< @ingroup LacHashDefs + * block size for Galois Hash 128 part of CCM */ +#define LAC_HASH_AES_GCM_DIGEST_SIZE 16 +/**< @ingroup LacHashDefs + * untruncated size of authentication field */ + +/* constants for KASUMI F9 */ +#define LAC_HASH_KASUMI_F9_BLOCK_SIZE 8 +/**< @ingroup LacHashDefs + * KASUMI_F9 block size in bytes */ +#define LAC_HASH_KASUMI_F9_DIGEST_SIZE 4 +/**< @ingroup LacHashDefs + * KASUMI_F9 digest size in bytes */ + +/* constants for SNOW3G UIA2 */ +#define LAC_HASH_SNOW3G_UIA2_BLOCK_SIZE 8 +/**< @ingroup LacHashDefs + * SNOW3G UIA2 block size in bytes */ +#define LAC_HASH_SNOW3G_UIA2_DIGEST_SIZE 4 +/**< @ingroup LacHashDefs + * SNOW3G UIA2 digest size in bytes */ + +/* constants for AES CBC MAC */ +#define LAC_HASH_AES_CBC_MAC_BLOCK_SIZE 16 +/**< @ingroup LacHashDefs + * AES CBC MAC block size in bytes */ +#define LAC_HASH_AES_CBC_MAC_DIGEST_SIZE 16 +/**< @ingroup LacHashDefs + * AES CBC MAC digest size in bytes */ + +#define LAC_HASH_ZUC_EIA3_BLOCK_SIZE 4 +/**< @ingroup LacHashDefs + * ZUC EIA3 block size in bytes */ +#define LAC_HASH_ZUC_EIA3_DIGEST_SIZE 4 +/**< @ingroup LacHashDefs + * ZUC EIA3 digest size in bytes */ + +/* constants for AES GCM ICV allowed sizes */ +#define LAC_HASH_AES_GCM_ICV_SIZE_8 8 +#define LAC_HASH_AES_GCM_ICV_SIZE_12 12 +#define LAC_HASH_AES_GCM_ICV_SIZE_16 16 + +/* constants for AES CCM ICV allowed sizes */ +#define LAC_HASH_AES_CCM_ICV_SIZE_MIN 4 +#define LAC_HASH_AES_CCM_ICV_SIZE_MAX 16 + +/* constants for authentication algorithms */ +#define LAC_HASH_IPAD_BYTE 0x36 +/**< @ingroup LacHashDefs + * Ipad Byte */ +#define LAC_HASH_OPAD_BYTE 0x5c +/**< @ingroup LacHashDefs + * Opad Byte */ + +#define LAC_HASH_IPAD_4_BYTES 0x36363636 +/**< @ingroup LacHashDefs + * Ipad for 4 Bytes */ +#define LAC_HASH_OPAD_4_BYTES 0x5c5c5c5c +/**< @ingroup LacHashDefs + * Opad for 4 Bytes */ + +/* Key Modifier (KM) value used in Kasumi algorithm in F9 mode to XOR + * Integrity Key (IK) */ +#define LAC_HASH_KASUMI_F9_KEY_MODIFIER_4_BYTES 0xAAAAAAAA +/**< @ingroup LacHashDefs + * Kasumi F9 Key Modifier for 4 bytes */ + +#define LAC_SYM_QAT_HASH_IV_REQ_MAX_SIZE_QW 2 +/**< @ingroup LacSymQatHash + * Maximum size of IV embedded in the request. + * This is set to 2, namely 4 LONGWORDS. */ + +#define LAC_SYM_QAT_HASH_STATE1_MAX_SIZE_BYTES LAC_HASH_SHA512_BLOCK_SIZE +/**< @ingroup LacSymQatHash + * Maximum size of state1 in the hash setup block of the content descriptor. + * This is set to the block size of SHA512. */ + +#define LAC_SYM_QAT_HASH_STATE2_MAX_SIZE_BYTES LAC_HASH_SHA512_BLOCK_SIZE +/**< @ingroup LacSymQatHash + * Maximum size of state2 in the hash setup block of the content descriptor. + * This is set to the block size of SHA512. */ + +#define LAC_MAX_INNER_OUTER_PREFIX_SIZE_BYTES 255 +/**< Maximum size of the inner and outer prefix for nested hashing operations. 
+ * This comes from the maximum size supported by the accelerator, which stores
+ * the size in an 8-bit field */
+
+#define LAC_MAX_HASH_STATE_STORAGE_SIZE \
+	(sizeof(icp_qat_hw_auth_counter_t) + LAC_HASH_SHA512_STATE_SIZE)
+/**< Maximum size of the hash state storage section of the hash state prefix
+ * buffer */
+
+#define LAC_MAX_HASH_STATE_BUFFER_SIZE_BYTES \
+	LAC_MAX_HASH_STATE_STORAGE_SIZE + \
+	(LAC_ALIGN_POW2_ROUNDUP(LAC_MAX_INNER_OUTER_PREFIX_SIZE_BYTES, \
+				LAC_QUAD_WORD_IN_BYTES) * \
+	 2)
+/**< Maximum size that the hash state prefix buffer can be, i.e. for a nested
+ * hash with the maximum sized inner prefix and outer prefix */
+
+#define LAC_MAX_AAD_SIZE_BYTES 256
+/**< Maximum size of AAD in bytes */
+
+#define IS_HMAC_ALG(algorithm) \
+	((algorithm == CPA_CY_SYM_HASH_MD5) || \
+	 (algorithm == CPA_CY_SYM_HASH_SHA1) || \
+	 (algorithm == CPA_CY_SYM_HASH_SHA224) || \
+	 (algorithm == CPA_CY_SYM_HASH_SHA256) || \
+	 (algorithm == CPA_CY_SYM_HASH_SHA384) || \
+	 (algorithm == CPA_CY_SYM_HASH_SHA512) || \
+	 (algorithm == CPA_CY_SYM_HASH_SHA3_224) || \
+	 (algorithm == CPA_CY_SYM_HASH_SHA3_256) || \
+	 (algorithm == CPA_CY_SYM_HASH_SHA3_384) || \
+	 (algorithm == CPA_CY_SYM_HASH_SHA3_512) || \
+	 (algorithm == CPA_CY_SYM_HASH_SM3))
+/**< @ingroup LacSymQatHash
+ * Macro to detect if the hash algorithm is an HMAC algorithm */
+
+#define IS_HASH_MODE_1(qatHashMode) (ICP_QAT_HW_AUTH_MODE1 == qatHashMode)
+/**< @ingroup LacSymQatHash
+ * Macro to detect if the QAT hash mode is set to 1 (precompute mode),
+ * only used with algorithms in hash mode CPA_CY_SYM_HASH_MODE_AUTH */
+
+#define IS_HASH_MODE_2(qatHashMode) (ICP_QAT_HW_AUTH_MODE2 == qatHashMode)
+/**< @ingroup LacSymQatHash
+ * Macro to detect if the QAT hash mode is set to 2. This is used for TLS and
+ * mode 2 HMAC (no precompute mode) */
+
+#define IS_HASH_MODE_2_AUTH(qatHashMode, hashMode) \
+	((IS_HASH_MODE_2(qatHashMode)) && \
+	 (CPA_CY_SYM_HASH_MODE_AUTH == hashMode))
+/**< @ingroup LacSymQatHash
+ * Macro to check that the QAT hash mode is set to 2 and the hash mode is
+ * Auth. This applies to HMAC algorithms (no precompute). This is used
+ * to differentiate between TLS and HMAC */
+
+#define IS_HASH_MODE_2_NESTED(qatHashMode, hashMode) \
+	((IS_HASH_MODE_2(qatHashMode)) && \
+	 (CPA_CY_SYM_HASH_MODE_NESTED == hashMode))
+/**< @ingroup LacSymQatHash
+ * Macro to check that the QAT hash mode is set to 2 and the LAC hash mode is
+ * Nested. This applies to TLS.
This is used to differentiate between + * TLS and HMAC */ + +#endif /* LAC_SYM_HASH_DEFS_H */ diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_hash_precomputes.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_hash_precomputes.h new file mode 100644 index 000000000000..6fd93cc28175 --- /dev/null +++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_hash_precomputes.h @@ -0,0 +1,176 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_hash_precomputes.h + * + * @defgroup LacHashDefs Hash Definitions + * + * @ingroup LacHash + * + * Constants for hash algorithms + * + ***************************************************************************/ +#ifndef LAC_SYM_HASH_PRECOMPUTES_H +#define LAC_SYM_HASH_PRECOMPUTES_H + +#include "lac_sym_hash.h" + +#define LAC_SYM_AES_CMAC_RB_128 0x87 /* constant used for */ + /* CMAC calculation */ + +#define LAC_SYM_HASH_MSBIT_MASK 0x80 /* Mask to check MSB top bit */ + /* zero or one */ + +#define LAC_SINGLE_BUFFER_HW_META_SIZE \ + (sizeof(icp_buffer_list_desc_t) + sizeof(icp_flat_buffer_desc_t)) +/**< size of memory to allocate for the HW buffer list that is sent to the + * QAT */ + +#define LAC_SYM_HASH_PRECOMP_MAX_WORKING_BUFFER \ + ((sizeof(lac_sym_hash_precomp_op_data_t) * 2) + \ + sizeof(lac_sym_hash_precomp_op_t)) +/**< maximum size of the working data for the HMAC precompute operations + * + * Maximum size of lac_sym_hash_precomp_op_data_t is 264 bytes. For hash + * precomputes there are 2 of these structrues and a further + * lac_sym_hash_precomp_op_t structure required. This comes to a total of 536 + * bytes. + * For the asynchronous version of the precomputes, the memory for the hash + * state prefix buffer is used as the working memory. There are 584 bytes + * which are alloacted for the hash state prefix buffer which is enough to + * carve up for the precomputes. + */ + +#define LAC_SYM_HASH_PRECOMP_MAX_AES_ECB_DATA \ + ((ICP_QAT_HW_AES_128_KEY_SZ) * (3)) +/**< Maximum size for the data that an AES ECB precompute is generated on */ + +/** + ***************************************************************************** + * @ingroup LacHashDefs + * Precompute type enum + * @description + * Enum used to distinguish between precompute types + * + *****************************************************************************/ +typedef enum { + LAC_SYM_HASH_PRECOMP_HMAC = 1, + /**< Hmac precompute operation. Copy state from hash state buffer */ + LAC_SYM_HASH_PRECOMP_AES_ECB, + /**< XCBC/CGM precompute, Copy state from data buffer */ +} lac_sym_hash_precomp_type_t; + +/** + ***************************************************************************** + * @ingroup LacHashDefs + * overall precompute management structure + * @description + * structure used to manage the precompute operations for a session + * + *****************************************************************************/ +typedef struct lac_sym_hash_precomp_op_s { + lac_hash_precompute_done_cb_t callbackFn; + /**< Callback function to be invoked when the final precompute completes + */ + + void *pCallbackTag; + /**< Opaque data to be passed back as a parameter in the callback */ + + QatUtilsAtomic opsPending; + /**< counter used to determine if the current precompute is the + * final one. 
*/ + +} lac_sym_hash_precomp_op_t; + +/** + ***************************************************************************** + * @ingroup LacHashDefs + * hmac precompute structure as used by the QAT + * @description + * data used by the QAT for HMAC precomputes + * + * Must be allocated on an 8-byte aligned memory address. + * + *****************************************************************************/ +typedef struct lac_sym_hash_hmac_precomp_qat_s { + Cpa8U data[LAC_HASH_SHA512_BLOCK_SIZE]; + /**< data to be hashed - block size of data for the algorithm */ + /* NOTE: to save space we could have got the QAT to overwrite + * this with the hash state storage */ + icp_qat_fw_la_auth_req_params_t hashReqParams; + /**< Request parameters as read in by the QAT */ + Cpa8U bufferDesc[LAC_SINGLE_BUFFER_HW_META_SIZE]; + /**< Buffer descriptor structure */ + Cpa8U hashStateStorage[LAC_MAX_HASH_STATE_STORAGE_SIZE]; + /**< Internal buffer where QAT writes the intermediate partial + * state that is used in the precompute */ +} lac_sym_hash_hmac_precomp_qat_t; + +/** + ***************************************************************************** + * @ingroup LacHashDefs + * AES ECB precompute structure as used by the QAT + * @description + * data used by the QAT for AES ECB precomptes + * + * Must be allocated on an 8-byte aligned memory address. + * + *****************************************************************************/ +typedef struct lac_sym_hash_aes_precomp_qat_s { + Cpa8U contentDesc[LAC_SYM_QAT_MAX_CIPHER_SETUP_BLK_SZ]; + /**< Content descriptor for a cipher operation */ + Cpa8U data[LAC_SYM_HASH_PRECOMP_MAX_AES_ECB_DATA]; + /**< The data to be ciphered is conatined here and the result is + * written in place back into this buffer */ + icp_qat_fw_la_cipher_req_params_t cipherReqParams; + /**< Request parameters as read in by the QAT */ + Cpa8U bufferDesc[LAC_SINGLE_BUFFER_HW_META_SIZE]; + /**< Buffer descriptor structure */ +} lac_sym_hash_aes_precomp_qat_t; + +/** + ***************************************************************************** + * @ingroup LacHashDefs + * overall structure for managing a single precompute operation + * @description + * overall structure for managing a single precompute operation + * + * Must be allocated on an 8-byte aligned memory address. + * + *****************************************************************************/ +typedef struct lac_sym_hash_precomp_op_data_s { + sal_crypto_service_t *pInstance; + /**< Instance handle for the operation */ + Cpa8U reserved[4]; + /**< padding to align later structures on minimum 8-Byte address */ + lac_sym_hash_precomp_type_t opType; + /**< operation type to determine the precompute type in the callback */ + lac_sym_hash_precomp_op_t *pOpStatus; + /**< structure containing the counter and the condition for the overall + * precompute operation. 
This is a pointer because the memory structure
+	 * may be shared between precomputes when there is more than one, as in
+	 * the case of HMAC */
+	union {
+		lac_sym_hash_hmac_precomp_qat_t hmacQatData;
+		/**< Data sent to the QAT for hmac precomputes */
+		lac_sym_hash_aes_precomp_qat_t aesQatData;
+		/**< Data sent to the QAT for AES ECB precomputes */
+	} u;
+
+	/**< ASSUMPTION: The above structures are 8 byte aligned if the overall
+	 * struct is 8 byte aligned, as there are two 4 byte fields before this
+	 * union */
+	Cpa32U stateSize;
+	/**< Size of the state to be copied into the state pointer in the
+	 * content descriptor */
+	Cpa8U *pState;
+	/**< pointer to the state in the content descriptor where the result of
+	 * the precompute should be copied to */
+} lac_sym_hash_precomp_op_data_t;
+
+#endif /* LAC_SYM_HASH_PRECOMPUTES_H */
diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_key.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_key.h
new file mode 100644
index 000000000000..bae0d8faabc7
--- /dev/null
+++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_key.h
@@ -0,0 +1,184 @@
+/***************************************************************************
+ *
+ * <COPYRIGHT_TAG>
+ *
+ ***************************************************************************/
+
+/**
+ *****************************************************************************
+ * @file lac_sym_key.h
+ *
+ * @defgroup LacSymKey Key Generation
+ *
+ * @ingroup LacSym
+ *
+ * @lld_start
+ *
+ * @lld_overview
+ *
+ * The key generation component is responsible for SSL, TLS & MGF operations.
+ * All memory required for the keygen operations is obtained from the keygen
+ * cookie structure, which is carved up as required.
+ *
+ * For SSL the QAT accelerates the nested hash function with MD5 as the
+ * outer hash and SHA1 as the inner hash.
+ *
+ * Refer to sections in draft-freier-ssl-version3-02.txt:
+ * 6.1 Asymmetric cryptographic computations - This refers to converting
+ * the pre master secret to the master secret.
+ * 6.2.2 Converting the master secret into keys and MAC secrets - Using
+ * the master secret to generate the key material.
+ *
+ * For TLS the QAT accelerates the PRF function as described in
+ * rfc4346 - TLS version 1.1 (this obsoletes rfc2246 - TLS version 1.0)
+ * 5. HMAC and the pseudorandom function - For the TLS PRF and getting
+ * S1 and S2 from the secret.
+ * 6.3. Key calculation - For how the key material is generated
+ * 7.4.9. Finished - How the finished message uses the TLS PRF
+ * 8.1. Computing the master secret
+ *
+ *
+ * @lld_dependencies
+ * \ref LacSymQatHash: for building up hash content descriptor
+ * \ref LacMem: for virt to phys conversions
+ *
+ * @lld_initialisation
+ * The response handler is registered with Symmetric. The maximum SSL label is
+ * allocated. A structure is allocated containing all the TLS labels that
+ * are supported. On shutdown the memory for these structures is freed.
+ *
+ * @lld_module_algorithms
+ * @lld_process_context
+ *
+ * @lld_end
+ *
+ *
+ *****************************************************************************/
+#ifndef LAC_SYM_KEY_H_
+#define LAC_SYM_KEY_H_
+
+#include "icp_qat_fw_la.h"
+#include "cpa_cy_key.h"
+
+/**< @ingroup LacSymKey
+ * Label for SSL. Size is 136 bytes for 16 iterations, which can theoretically
+ * generate up to 256 bytes of output data.
QAT will generate a maximum of + * 255 bytes */ + +#define LAC_SYM_KEY_TLS_MASTER_SECRET_LABEL ("master secret") +/**< @ingroup LacSymKey + * Label for TLS Master Secret Key Derivation, as defined in RFC4346 */ + +#define LAC_SYM_KEY_TLS_KEY_MATERIAL_LABEL ("key expansion") +/**< @ingroup LacSymKey + * Label for TLS Key Material Generation, as defined in RFC4346. */ + +#define LAC_SYM_KEY_TLS_CLIENT_FIN_LABEL ("client finished") +/**< @ingroup LacSymKey + * Label for TLS Client finished Message, as defined in RFC4346. */ + +#define LAC_SYM_KEY_TLS_SERVER_FIN_LABEL ("server finished") +/**< @ingroup LacSymKey + * Label for TLS Server finished Message, as defined in RFC4346. */ + +/* +******************************************************************************* +* Define Constants and Macros for SSL, TLS and MGF +******************************************************************************* +*/ + +#define LAC_SYM_KEY_NO_HASH_BLK_OFFSET_QW 0 +/**< Used to indicate there is no hash block offset in the content descriptor + */ + +/* +******************************************************************************* +* Define Constant lengths for HKDF TLS v1.3 sublabels. +******************************************************************************* +*/ +#define HKDF_SUB_LABEL_KEY_LENGTH ((Cpa8U)13) +#define HKDF_SUB_LABEL_IV_LENGTH ((Cpa8U)12) +#define HKDF_SUB_LABEL_RESUMPTION_LENGTH ((Cpa8U)20) +#define HKDF_SUB_LABEL_FINISHED_LENGTH ((Cpa8U)18) +#define HKDF_SUB_LABELS_ALL \ + (CPA_CY_HKDF_SUBLABEL_KEY | CPA_CY_HKDF_SUBLABEL_IV | \ + CPA_CY_HKDF_SUBLABEL_RESUMPTION | CPA_CY_HKDF_SUBLABEL_FINISHED) +#define LAC_KEY_HKDF_SUBLABELS_NUM 4 +#define LAC_KEY_HKDF_DIGESTS 0 +#define LAC_KEY_HKDF_CIPHERS_MAX (CPA_CY_HKDF_TLS_AES_128_CCM_8_SHA256 + 1) +#define LAC_KEY_HKDF_SUBLABELS_MAX (LAC_KEY_HKDF_SUBLABELS_NUM + 1) + +/** + ****************************************************************************** + * @ingroup LacSymKey + * TLS label struct + * + * @description + * This structure is used to hold the various TLS labels. Each field is + * on an 8 byte boundary provided the structure itslef is 8 bytes aligned. + *****************************************************************************/ +typedef struct lac_sym_key_tls_labels_s { + Cpa8U masterSecret[ICP_QAT_FW_LA_TLS_LABEL_LEN_MAX]; + /**< Master secret label */ + Cpa8U keyMaterial[ICP_QAT_FW_LA_TLS_LABEL_LEN_MAX]; + /**< Key material label */ + Cpa8U clientFinished[ICP_QAT_FW_LA_TLS_LABEL_LEN_MAX]; + /**< client finished label */ + Cpa8U serverFinished[ICP_QAT_FW_LA_TLS_LABEL_LEN_MAX]; + /**< server finished label */ +} lac_sym_key_tls_labels_t; + +/** + ****************************************************************************** + * @ingroup LacSymKey + * TLS HKDF sub label struct + * + * @description + * This structure is used to hold the various TLS HKDF sub labels. + * Each field is on an 8 byte boundary. 
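+ *
+ *      For reference (an interpretation, not text from the driver): the
+ *      HKDF_SUB_LABEL_*_LENGTH constants above appear to correspond to the
+ *      serialised TLS v1.3 HkdfLabel of RFC 8446 with an empty context,
+ *      i.e. 2 bytes of output length, a 1-byte label length, the label
+ *      "tls13 " + name, and a 1-byte context length. For example,
+ *      "tls13 key" gives 2 + 1 + 9 + 1 = 13 (HKDF_SUB_LABEL_KEY_LENGTH) and
+ *      "tls13 iv" gives 2 + 1 + 8 + 1 = 12 (HKDF_SUB_LABEL_IV_LENGTH).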
+ *****************************************************************************/ +typedef struct lac_sym_key_tls_hkdf_sub_labels_s { + CpaCyKeyGenHKDFExpandLabel keySublabel256; + /**< CPA_CY_HKDF_SUBLABEL_KEY */ + CpaCyKeyGenHKDFExpandLabel ivSublabel256; + /**< CPA_CY_HKDF_SUBLABEL_IV */ + CpaCyKeyGenHKDFExpandLabel resumptionSublabel256; + /**< CPA_CY_HKDF_SUBLABEL_RESUMPTION */ + CpaCyKeyGenHKDFExpandLabel finishedSublabel256; + /**< CPA_CY_HKDF_SUBLABEL_FINISHED */ + CpaCyKeyGenHKDFExpandLabel keySublabel384; + /**< CPA_CY_HKDF_SUBLABEL_KEY */ + CpaCyKeyGenHKDFExpandLabel ivSublabel384; + /**< CPA_CY_HKDF_SUBLABEL_IV */ + CpaCyKeyGenHKDFExpandLabel resumptionSublabel384; + /**< CPA_CY_HKDF_SUBLABEL_RESUMPTION */ + CpaCyKeyGenHKDFExpandLabel finishedSublabel384; + /**< CPA_CY_HKDF_SUBLABEL_FINISHED */ + CpaCyKeyGenHKDFExpandLabel keySublabelChaChaPoly; + /**< CPA_CY_HKDF_SUBLABEL_KEY */ + CpaCyKeyGenHKDFExpandLabel ivSublabelChaChaPoly; + /**< CPA_CY_HKDF_SUBLABEL_IV */ + CpaCyKeyGenHKDFExpandLabel resumptionSublabelChaChaPoly; + /**< CPA_CY_HKDF_SUBLABEL_RESUMPTION */ + CpaCyKeyGenHKDFExpandLabel finishedSublabelChaChaPoly; + /**< CPA_CY_HKDF_SUBLABEL_FINISHED */ + Cpa64U sublabelPhysAddr256; + /**< Physical address of the SHA-256 subLabels */ + Cpa64U sublabelPhysAddr384; + /**< Physical address of the SHA-384 subLabels */ + Cpa64U sublabelPhysAddrChaChaPoly; + /**< Physical address of the ChaChaPoly subLabels */ +} lac_sym_key_tls_hkdf_sub_labels_t; + +/** + ****************************************************************************** + * @ingroup LacSymKey + * This function prints the stats to standard out. + * + * @retval CPA_STATUS_SUCCESS Status Success + * @retval CPA_STATUS_FAIL General failure + * + *****************************************************************************/ +void LacKeygen_StatsShow(CpaInstanceHandle instanceHandle); + +#endif diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_partial.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_partial.h new file mode 100644 index 000000000000..b3088784a273 --- /dev/null +++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_partial.h @@ -0,0 +1,121 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_partial.h + * + * @defgroup LacSymPartial Partial Packets + * + * @ingroup LacSymCommon + * + * Partial packet handling code + * + * @lld_start + * + * <b>Partials In Flight</b>\n + * The API states that for partial packets the client should not submit + * the next partial request until the callback for the current partial has + * been called. We have chosen to enforce this rather than letting the user + * proceed where they would get an incorrect digest, cipher result. + * + * Maintain a SpinLock for partials in flight per session. Try and acquire this + * SpinLock. If it cant be acquired return an error straight away to the client + * as there is already a partial in flight. There is no blocking in the data + * path for this. + * + * By preventing any other partials from coming in while a partial is in flight + * we can check and change the state of the session without having to lock + * round it (dont want to have to lock and block in the data path). The state + * of the session indicates the previous packet type that a request was + * successfully completed for. The last packet type is only updated for partial + * packets. 
This state determines the packet types that can be accepted.
+ * e.g. a last partial will not be accepted unless the previous packet was a
+ * partial. By only allowing one partial packet to be in flight, there is no
+ * need to lock around the update of the previous packet type for the session.
+ *
+ * The ECB cipher mode ciphers each block separately. No state is maintained
+ * between blocks. There is no need to wait for the callback for a previous
+ * partial in ECB mode as the result of the previous partial has no impact on
+ * it. The API and our implementation only allow 1 partial packet to be in
+ * flight per session, therefore a partial packet request for ECB mode must
+ * be fully completed (i.e. the callback called) before the next partial
+ * request can be issued.
+ *
+ * <b>Partial Ordering</b>\n
+ * The order in which the user submits partial packets will be checked.
+ * (We could have let the user proceed and get an incorrect digest/cipher
+ * result, but chose against this.)
+ *
+ * -# Maintain the last packet type of a partial operation for the session. If
+ * there have been no previous partials, we will accept only first partials.
+ * -# The state must be set to partial before we will accept a final partial,
+ * i.e. a partial request must have already completed.
+ *
+ * The last packet type is updated in the callback for partial packets as this
+ * is the only place we can guarantee that a partial packet operation has been
+ * completed. When a partial completes, the state can be updated from FULL to
+ * PARTIAL. The SpinLock for partial packets in flight for the session can be
+ * unlocked at this point. On a final partial request the last packet type is
+ * reset back to FULL. NOTE: This is not done at the same time as the check in
+ * the perform, as if an error occurs we would have to roll back the state.
+ *
+ * For Hash mode it is possible to interleave full packets and a single
+ * partial packet stream in a session, as the hash state buffer is updated for
+ * partial packets. It is not touched by full packets. For cipher mode, as the
+ * client manages the state, they can interleave full packets and a single
+ * partial packet stream. For ARC4, the state is managed internally and the
+ * packet type will always be set to partial internally.
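+ *
+ * As a rough illustration only (not the driver code), the check on a final
+ * partial described above amounts to something like:
+ * \code
+ * // partialState holds the last partial packet type completed for the
+ * // session (FULL if no partial stream is in progress)
+ * if (CPA_CY_SYM_PACKET_TYPE_LAST_PARTIAL == packetType &&
+ *     CPA_CY_SYM_PACKET_TYPE_PARTIAL != partialState)
+ *         return CPA_STATUS_INVALID_PARAM;  // no partial stream in progress
+ * \endcode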
+ * + * @lld_end + * + ***************************************************************************/ + +/***************************************************************************/ + +#ifndef LAC_SYM_PARTIAL_H +#define LAC_SYM_PARTIAL_H + +#include "cpa.h" +#include "cpa_cy_sym.h" + +/***************************************************************************/ + +/** +******************************************************************************* +* @ingroup LacSymPartial +* check if partial packet request is valid for a session +* +* @description +* This function checks to see if there is a partial packet request in +* flight and then if the partial state is correct +* +* @param[in] packetType Partial packet request +* @param[in] partialState Partial state of session +* +* @retval CPA_STATUS_SUCCESS Normal Operation +* @retval CPA_STATUS_INVALID_PARAM Invalid Parameter +* +*****************************************************************************/ +CpaStatus LacSym_PartialPacketStateCheck(CpaCySymPacketType packetType, + CpaCySymPacketType partialState); + +/** +******************************************************************************* +* @ingroup LacSymPartial +* update the state of the partial packet in a session +* +* @description +* This function is called in callback operation. It updates the state +* of a partial packet in a session and indicates that there is no +* longer a partial packet in flight for the session +* +* @param[in] packetType Partial packet request +* @param[out] pPartialState Pointer to partial state of session +* +*****************************************************************************/ +void LacSym_PartialPacketStateUpdate(CpaCySymPacketType packetType, + CpaCySymPacketType *pPartialState); + +#endif /* LAC_SYM_PARTIAL_H */ diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat.h new file mode 100644 index 000000000000..af49764b6498 --- /dev/null +++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat.h @@ -0,0 +1,209 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + ***************************************************************************** + * @file lac_sym_qat.h + * + * @defgroup LacSymQat Symmetric QAT + * + * @ingroup LacSym + * + * Interfaces for populating the qat structures for a symmetric operation + * + * @lld_start + * + * @lld_overview + * This file documents the interfaces for populating the qat structures + * that are common for all symmetric operations. + * + * @lld_dependencies + * - \ref LacSymQatHash "Hash QAT Comms" Sym Qat commons for Hash + * - \ref LacSymQat_Cipher "Cipher QAT Comms" Sym Qat commons for Cipher + * - OSAL: logging + * - \ref LacMem "Memory" - Inline memory functions + * + * @lld_initialisation + * This component is initialied during the LAC initialisation sequence. It + * is called by the Symmetric Initialisation function. + * + * @lld_module_algorithms + * + * @lld_process_context + * Refer to \ref LacHash "Hash" and \ref LacCipher "Cipher" for sequence + * diagrams to see their interactions with this code. 
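+ *
+ * As a small illustration of the packet-type translation performed by
+ * this component (the pSessionDesc->partialState field used below is an
+ * assumption for the example, not part of this interface):
+ *
+ * @code
+ * Cpa32U qatPacketType = 0;
+ *
+ * // Map the LAC packet type onto the QAT packet type; the previous
+ * // packet state lets a first partial be told apart from a middle one.
+ * LacSymQat_packetTypeGet(pOpData->packetType,
+ *                         pSessionDesc->partialState,
+ *                         &qatPacketType);
+ * @endcode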
+ * + * + * @lld_end + * + *****************************************************************************/ + +/*****************************************************************************/ + +#ifndef LAC_SYM_QAT_H +#define LAC_SYM_QAT_H + +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ + +#include "cpa.h" +#include "cpa_cy_sym.h" +#include "icp_accel_devices.h" +#include "icp_qat_fw_la.h" +#include "icp_qat_hw.h" +#include "lac_session.h" +#include "sal_qat_cmn_msg.h" +#include "lac_common.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ + +#define LAC_SYM_DEFAULT_QAT_PTR_TYPE QAT_COMN_PTR_TYPE_SGL +#define LAC_SYM_DP_QAT_PTR_TYPE QAT_COMN_PTR_TYPE_FLAT +#define LAC_SYM_KEY_QAT_PTR_TYPE QAT_COMN_PTR_TYPE_FLAT +/**< @ingroup LacSymQat + * LAC SYM Source & Destination buffer type (FLAT/SGL) */ + +#define LAC_QAT_SYM_REQ_SZ_LW 32 +#define SYM_TX_MSG_SIZE (LAC_QAT_SYM_REQ_SZ_LW * LAC_LONG_WORD_IN_BYTES) +#define NRBG_TX_MSG_SIZE (LAC_QAT_SYM_REQ_SZ_LW * LAC_LONG_WORD_IN_BYTES) + +#define LAC_QAT_SYM_RESP_SZ_LW 8 +#define SYM_RX_MSG_SIZE (LAC_QAT_SYM_RESP_SZ_LW * LAC_LONG_WORD_IN_BYTES) +#define NRBG_RX_MSG_SIZE (LAC_QAT_SYM_RESP_SZ_LW * LAC_LONG_WORD_IN_BYTES) + +/** + ******************************************************************************* + * @ingroup LacSymQat + * Symmetric crypto response handler + * + * @description + * This function handles the symmetric crypto response + * + * @param[in] trans_handle transport handle (if ICP_QAT_DBG set) + * @param[in] instanceHandle void* pRespMsg + * + * + *****************************************************************************/ +void LacSymQat_SymRespHandler(void *pRespMsg); + +/** + ******************************************************************************* + * @ingroup LacSymQat + * Initialise the Symmetric QAT code + * + * @description + * This function initialises the symmetric QAT code + * + * @param[in] device Pointer to the acceleration device + * structure + * @param[in] instanceHandle Instance handle + * @param[in] numSymRequests Number of concurrent requests a pair + * (tx and rx) need to support + * + * @return CPA_STATUS_SUCCESS Operation successful + * @return CPA_STATUS_FAIL Initialisation Failed + * + *****************************************************************************/ +CpaStatus LacSymQat_Init(CpaInstanceHandle instanceHandle); + +/** + ******************************************************************************* + * @ingroup LacSymQat + * Register a response handler function for a symmetric command ID + * + * @description + * This function registers a response handler function for a symmetric + * operation. + * + * Note: This operation should only be performed once by the init function + * of a component. There is no corresponding deregister function, but + * registering a NULL function pointer will have the same effect. There + * MUST not be any requests in flight when calling this function. 
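+ *
+ * For example, a component's init path might route the responses for the
+ * command IDs it owns to a single handler (the handler name below is
+ * hypothetical; the command ID values are assumed to be those defined in
+ * icp_qat_fw_la.h):
+ *
+ * @code
+ * LacSymQat_RespHandlerRegister(ICP_QAT_FW_LA_CMD_CIPHER,
+ *                               LacSymCb_ProcessCallback);
+ * LacSymQat_RespHandlerRegister(ICP_QAT_FW_LA_CMD_AUTH,
+ *                               LacSymCb_ProcessCallback);
+ * @endcode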
+ * + * @param[in] lacCmdId Command Id of operation + * @param[in] pCbHandler callback handler function + * + * @return None + * + *****************************************************************************/ +void LacSymQat_RespHandlerRegister(icp_qat_fw_la_cmd_id_t lacCmdId, + sal_qat_resp_handler_func_t pCbHandler); + +/** + ****************************************************************************** + * @ingroup LacSymQat + * get the QAT packet type + * + * @description + * This function returns the QAT packet type for a LAC packet type. The + * LAC packet type does not indicate a first partial. therefore for a + * partial request, the previous packet type needs to be looked at to + * figure out if the current partial request is a first partial. + * + * + * @param[in] packetType LAC Packet type + * @param[in] packetState LAC Previous Packet state + * @param[out] pQatPacketType Packet type using the QAT macros + * + * @return none + * + *****************************************************************************/ +void LacSymQat_packetTypeGet(CpaCySymPacketType packetType, + CpaCySymPacketType packetState, + Cpa32U *pQatPacketType); + +/** + ****************************************************************************** + * @ingroup LacSymQat + * Populate the command flags based on the packet type + * + * @description + * This function populates the following flags in the Symmetric Crypto + * service_specif_flags field of the common header of the request: + * - LA_PARTIAL + * - UPDATE_STATE + * - RET_AUTH_RES + * - CMP_AUTH_RES + * based on looking at the input params listed below. + * + * @param[in] qatPacketType Packet type + * @param[in] cmdId Command Id + * @param[in] cipherAlgorithm Cipher Algorithm + * @param[out] pLaCommandFlags Command Flags + * + * @return none + * + *****************************************************************************/ +void LacSymQat_LaPacketCommandFlagSet(Cpa32U qatPacketType, + icp_qat_fw_la_cmd_id_t laCmdId, + CpaCySymCipherAlgorithm cipherAlgorithm, + Cpa16U *pLaCommandFlags, + Cpa32U ivLenInBytes); + +/** + ****************************************************************************** + * @ingroup LacSymQat + * + * + * @description + * defaults the common request service specific flags + * + * @param[in] laCmdFlags Common request service specific flags + * @param[in] symOp Type of operation performed e.g hash or cipher + * + * @return none + * + *****************************************************************************/ + +void LacSymQat_LaSetDefaultFlags(icp_qat_fw_serv_specif_flags *laCmdFlags, + CpaCySymOp symOp); + +#endif /* LAC_SYM_QAT_H */ diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_cipher.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_cipher.h new file mode 100644 index 000000000000..2360aa53633f --- /dev/null +++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_cipher.h @@ -0,0 +1,291 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + ***************************************************************************** + * @file lac_sym_qat_cipher.h + * + * @defgroup LacSymQat_Cipher Cipher QAT + * + * @ingroup LacSymQat + * + * external interfaces for populating QAT structures for cipher operations. 
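+ *
+ * As an orientation only (the variable names below are illustrative and
+ * the CpaCySymOpData fields are those of the QA API, not of this file),
+ * the per-request helper declared here is used along these lines:
+ *
+ * @code
+ * // Fill the cipher request parameters of a prepared 128-byte request
+ * // block for a single operation.
+ * status = LacSymQat_CipherRequestParamsPopulate(
+ *     pBulkReq,
+ *     pOpData->cryptoStartSrcOffsetInBytes,
+ *     pOpData->messageLenToCipherInBytes,
+ *     ivPhysAddr,
+ *     pOpData->pIv);
+ * @endcode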
+ * + *****************************************************************************/ + +/*****************************************************************************/ + +#ifndef LAC_SYM_QAT_CIPHER_H +#define LAC_SYM_QAT_CIPHER_H + +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ + +#include "cpa_cy_sym.h" +#include "icp_qat_fw_la.h" +#include "lac_session.h" +#include "lac_sal_types_crypto.h" + +/* + ************************************************************************** + * @ingroup LacSymQat_Cipher + * + * @description + * Defines for building the cipher request params cache + * + ************************************************************************** */ + +#define LAC_SYM_QAT_CIPHER_NEXT_ID_BIT_OFFSET 24 +#define LAC_SYM_QAT_CIPHER_CURR_ID_BIT_OFFSET 16 +#define LAC_SYM_QAT_CIPHER_STATE_SIZE_BIT_OFFSET 8 +#define LAC_SYM_QAT_CIPHER_OFFSET_IN_DRAM_GCM_SPC 9 +#define LAC_SYM_QAT_CIPHER_OFFSET_IN_DRAM_CHACHA_SPC 2 +#define LAC_SYM_QAT_CIPHER_STATE_SIZE_SPC 48 +/** + ****************************************************************************** + * @ingroup LacSymQat_Cipher + * Retrieve the cipher block size in bytes for a given algorithm + * + * @description + * This function returns a hard-coded block size for the specific cipher + * algorithm + * + * @param[in] cipherAlgorithm Cipher algorithm for the current session + * + * @retval The block size, in bytes, for the given cipher algorithm + * + *****************************************************************************/ +Cpa8U +LacSymQat_CipherBlockSizeBytesGet(CpaCySymCipherAlgorithm cipherAlgorithm); + +/** + ****************************************************************************** + * @ingroup LacSymQat_Cipher + * Retrieve the cipher IV/state size in bytes for a given algorithm + * + * @description + * This function returns a hard-coded IV/state size for the specific cipher + * algorithm + * + * @param[in] cipherAlgorithm Cipher algorithm for the current session + * + * @retval The IV/state size, in bytes, for the given cipher algorithm + * + *****************************************************************************/ +Cpa32U LacSymQat_CipherIvSizeBytesGet(CpaCySymCipherAlgorithm cipherAlgorithm); + +/** + ****************************************************************************** + * @ingroup LacSymQat_Cipher + * Populate the cipher request params structure + * + * @description + * This function is passed a pointer to the 128B request block. + * (This memory must be allocated prior to calling this function). It + * populates: + * - the cipher fields of the req_params block in the request. No + * need to zero this first, all fields will be populated. + * - the corresponding CIPH_IV_FLD flag in the serv_specif_flags field + * of the common header. + * To do this it uses the parameters described below and the following + *fields from the request block which must be populated prior to calling this + *function: + * - cd_ctrl.cipher_state_sz + * - UPDATE_STATE flag in comn_hdr.serv_specif_flags + * + * + * @param[in] pReq Pointer to request block. 
+ * * + * @param[in] cipherOffsetInBytes Offset to cipher data in user data buffer + * + * @param[in] cipherLenInBytes Length of cipher data in buffer + * + * @param[in] ivBufferPhysAddr Physical address of aligned IV/state + * buffer + * @param[in] pIvBufferVirt Virtual address of aligned IV/state + * buffer + * @retval void + * + *****************************************************************************/ +CpaStatus LacSymQat_CipherRequestParamsPopulate(icp_qat_fw_la_bulk_req_t *pReq, + Cpa32U cipherOffsetInBytes, + Cpa32U cipherLenInBytes, + Cpa64U ivBufferPhysAddr, + Cpa8U *pIvBufferVirt); + +/** + ****************************************************************************** + * @ingroup LacSymQat_Cipher + * Derive initial ARC4 cipher state from a base key + * + * @description + * An initial state for an ARC4 cipher session is derived from the base + * key provided by the user, using the ARC4 Key Scheduling Algorithm (KSA) + * + * @param[in] pKey The base key provided by the user + * + * @param[in] keyLenInBytes The length of the base key provided. + * The range of valid values is 1-256 bytes + * + * @param[out] pArc4CipherState The initial state is written to this buffer, + * including i and j values, and 6 bytes of padding + * so 264 bytes must be allocated for this buffer + * by the caller + * + * @retval void + * + *****************************************************************************/ +void LacSymQat_CipherArc4StateInit(const Cpa8U *pKey, + Cpa32U keyLenInBytes, + Cpa8U *pArc4CipherState); + +/** + ****************************************************************************** + * @ingroup LacSymQat_CipherXTSModeUpdateKeyLen + * Update the initial XTS key after the first partial has been received. + * + * @description + * For XTS mode using partial packets, after the first partial response + * has been received, the the key length needs to be halved for subsequent + * partials. + * + * @param[in] pSessionDesc The session descriptor. + * + * @param[in] newKeySizeInBytes The new key size.. + * + * @retval void + * + *****************************************************************************/ +void LacSymQat_CipherXTSModeUpdateKeyLen(lac_session_desc_t *pSessionDesc, + Cpa32U newKeySizeInBytes); + +/** + ****************************************************************************** + * @ingroup LacSymQat_Cipher + * LacSymQat_CipherCtrlBlockInitialize() + * + * @description + * intialize the cipher control block with all zeros + * + * @param[in] pMsg Pointer to the common request message + * + * @retval void + * + *****************************************************************************/ +void LacSymQat_CipherCtrlBlockInitialize(icp_qat_fw_la_bulk_req_t *pMsg); + +/** + ****************************************************************************** + * @ingroup LacSymQat_Cipher + * LacSymQat_CipherCtrlBlockWrite() + * + * @description + * This function populates the cipher control block of the common request + * message + * + * @param[in] pMsg Pointer to the common request message + * + * @param[in] cipherAlgorithm Cipher Algorithm to be used + * + * @param[in] targetKeyLenInBytes cipher key length in bytes of selected + * algorithm + * + * @param[out] nextSlice SliceID for next control block + * entry. 
This value is known only by + * the calling component + * + * @param[out] cipherCfgOffsetInQuadWord Offset into the config table in QW + * + * @retval void + * + *****************************************************************************/ +void LacSymQat_CipherCtrlBlockWrite(icp_qat_la_bulk_req_ftr_t *pMsg, + Cpa32U cipherAlgorithm, + Cpa32U targetKeyLenInBytes, + icp_qat_fw_slice_t nextSlice, + Cpa8U cipherCfgOffsetInQuadWord); + +/** + ****************************************************************************** + * @ingroup LacSymQat_Cipher + * LacSymQat_CipherHwBlockPopulateCfgData() + * + * @description + * Populate the physical HW block with config data + * + * @param[in] pSession Pointer to the session data + * + * @param[in] pCipherHwBlock pointer to the hardware control block + * in the common message + * + * @param[in] pSizeInBytes + * + * @retval void + * + *****************************************************************************/ +void LacSymQat_CipherHwBlockPopulateCfgData(lac_session_desc_t *pSession, + const void *pCipherHwBlock, + Cpa32U *pSizeInBytes); + +/** + ****************************************************************************** + * @ingroup LacSymQat_Cipher + * LacSymQat_CipherGetCfgData() + * + * @description + * setup the config data for cipher + * + * @param[in] pSession Pointer to the session data + * + * @param[in] pAlgorithm * + * @param[in] pMode + * @param[in] pDir + * @param[in] pKey_convert + * + * @retval void + * + *****************************************************************************/ +void LacSymQat_CipherGetCfgData(lac_session_desc_t *pSession, + icp_qat_hw_cipher_algo_t *pAlgorithm, + icp_qat_hw_cipher_mode_t *pMode, + icp_qat_hw_cipher_dir_t *pDir, + icp_qat_hw_cipher_convert_t *pKey_convert); + +/** + ****************************************************************************** + * @ingroup LacSymQat_Cipher + * LacSymQat_CipherHwBlockPopulateKeySetup() + * + * @description + * populate the key setup data in the cipher hardware control block + * in the common request message + * + * param[in] pCipherSetupData Pointer to cipher setup data + * + * @param[in] targetKeyLenInBytes Target key length. If key length given + * in cipher setup data is less that this, + * the key will be "rounded up" to this + * target length by padding it with 0's. + * In normal no-padding case, the target + * key length MUST match the key length + * in the cipher setup data. 
+ * + * @param[in] pCipherHwBlock Pointer to the cipher hardware block + * + * @param[out] pCipherHwBlockSizeBytes Size in bytes of cipher setup block + * + * + * @retval void + * + *****************************************************************************/ +void LacSymQat_CipherHwBlockPopulateKeySetup( + const CpaCySymCipherSetupData *pCipherSetupData, + Cpa32U targetKeyLenInBytes, + const void *pCipherHwBlock, + Cpa32U *pCipherHwBlockSizeBytes); + +#endif /* LAC_SYM_QAT_CIPHER_H */ diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_hash.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_hash.h new file mode 100644 index 000000000000..147e10f573f0 --- /dev/null +++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_hash.h @@ -0,0 +1,309 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + ***************************************************************************** + * @file lac_sym_qat_hash.h + * + * @defgroup LacSymQatHash Hash QAT + * + * @ingroup LacSymQat + * + * interfaces for populating qat structures for a hash operation + * + *****************************************************************************/ + +/*****************************************************************************/ + +#ifndef LAC_SYM_QAT_HASH_H +#define LAC_SYM_QAT_HASH_H + +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ + +#include "cpa.h" +#include "cpa_cy_sym.h" +#include "icp_qat_fw_la.h" +#include "icp_qat_hw.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ +#include "lac_common.h" + +/** + ****************************************************************************** + * @ingroup LacSymQatHash + * hash precomputes + * + * @description + * This structure contains infomation on the hash precomputes + * + *****************************************************************************/ +typedef struct lac_sym_qat_hash_precompute_info_s { + Cpa8U *pState1; + /**< state1 pointer */ + Cpa32U state1Size; + /**< state1 size */ + Cpa8U *pState2; + /**< state2 pointer */ + Cpa32U state2Size; + /**< state2 size */ +} lac_sym_qat_hash_precompute_info_t; + +/** + ****************************************************************************** + * @ingroup LacSymQatHash + * hash state prefix buffer info + * + * @description + * This structure contains infomation on the hash state prefix aad buffer + * + *****************************************************************************/ +typedef struct lac_sym_qat_hash_state_buffer_info_s { + Cpa64U pDataPhys; + /**< Physical pointer to the hash state prefix buffer */ + Cpa8U *pData; + /**< Virtual pointer to the hash state prefix buffer */ + Cpa8U stateStorageSzQuadWords; + /**< hash state storage size in quad words */ + Cpa8U prefixAadSzQuadWords; + /**< inner prefix/aad and outer prefix size in quad words */ +} lac_sym_qat_hash_state_buffer_info_t; + +/** + ****************************************************************************** + * @ingroup LacSymQatHash + * Init the hash specific part of the content descriptor. 
+ * + * @description + * This function populates the hash specific fields of the control block + * and the hardware setup block for a digest session. This function sets + * the size param to hold the size of the hash setup block. + * + * In the case of hash only, the content descriptor will contain just a + * hash control block and hash setup block. In the case of chaining it + * will contain the hash control block and setup block along with the + * control block and setup blocks of additional services. + * + * Note: The memory for the content descriptor MUST be allocated prior to + * calling this function. The memory for the hash control block and hash + * setup block MUST be set to 0 prior to calling this function. + * + * @image html contentDescriptor.png "Content Descriptor" + * + * @param[in] pMsg Pointer to req Parameter Footer + * + * @param[in] pHashSetupData Pointer to the hash setup data as + * defined in the LAC API. + * + * @param[in] pHwBlockBase Pointer to the base of the hardware + * setup block + * + * @param[in] hashBlkOffsetInHwBlock Offset in quad-words from the base of + * the hardware setup block where the + * hash block will start. This offset + * is stored in the control block. It + * is used to figure out where to write + * that hash setup block. + * + * @param[in] nextSlice SliceID for next control block + * entry This value is known only by + * the calling component + * + * @param[in] qatHashMode QAT hash mode + * + * @param[in] useSymConstantsTable Indicate if Shared-SRAM constants table + * is used for this session. If TRUE, the + * h/w setup block is NOT populated + * + * @param[in] useOptimisedContentDesc Indicate if optimised content desc + * is used for this session. + * + * @param[in] pPrecompute For auth mode, this is the pointer + * to the precompute data. Otherwise this + * should be set to NULL + * + * @param[out] pHashBlkSizeInBytes size in bytes of hash setup block + * + * @return void + * + *****************************************************************************/ +void +LacSymQat_HashContentDescInit(icp_qat_la_bulk_req_ftr_t *pMsg, + CpaInstanceHandle instanceHandle, + const CpaCySymHashSetupData *pHashSetupData, + void *pHwBlockBase, + Cpa32U hashBlkOffsetInHwBlock, + icp_qat_fw_slice_t nextSlice, + icp_qat_hw_auth_mode_t qatHashMode, + CpaBoolean useSymConstantsTable, + CpaBoolean useOptimisedContentDesc, + lac_sym_qat_hash_precompute_info_t *pPrecompute, + Cpa32U *pHashBlkSizeInBytes); + +/** + ****************************************************************************** + * @ingroup LacSymQatHash + * Calculate the size of the hash state prefix aad buffer + * + * @description + * This function inspects the hash control block and based on the values + * in the fields, it calculates the size of the hash state prefix aad + * buffer. + * + * A partial packet processing request is possible at any stage during a + * hash session. In this case, there will always be space for the hash + * state storage field of the hash state prefix buffer. When there is + * AAD data just the inner prefix AAD data field is used. + * + * @param[in] pMsg Pointer to the Request Message + * + * @param[out] pHashStateBuf Pointer to hash state prefix buffer info + * structure. 
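+ *
+ * Together with LacSymQat_HashStatePrefixAadBufferPopulate() declared
+ * below, a typical sequence is to query the sizes, allocate the buffer
+ * and then populate it (illustrative fragment only; the allocation is
+ * left to the caller):
+ *
+ * @code
+ * lac_sym_qat_hash_state_buffer_info_t hashStateBufInfo = { 0 };
+ *
+ * LacSymQat_HashStatePrefixAadBufferSizeGet(pMsg, &hashStateBufInfo);
+ * // ... allocate hashStateBufInfo.pData / pDataPhys according to the
+ * //     stateStorageSzQuadWords and prefixAadSzQuadWords fields ...
+ * LacSymQat_HashStatePrefixAadBufferPopulate(&hashStateBufInfo,
+ *                                            pMsg,
+ *                                            pInnerPrefixAad,
+ *                                            innerPrefixSize,
+ *                                            pOuterPrefix,
+ *                                            outerPrefixSize);
+ * @endcode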
+ * + * @return None + * + *****************************************************************************/ +void LacSymQat_HashStatePrefixAadBufferSizeGet( + icp_qat_la_bulk_req_ftr_t *pMsg, + lac_sym_qat_hash_state_buffer_info_t *pHashStateBuf); + +/** + ****************************************************************************** + * @ingroup LacSymQatHash + * Populate the fields of the hash state prefix buffer + * + * @description + * This function populates the inner prefix/aad fields and/or the outer + * prefix field of the hash state prefix buffer. + * + * @param[in] pHashStateBuf Pointer to hash state prefix buffer info + * structure. + * + * @param[in] pMsg Pointer to the Request Message + * + * @param[in] pInnerPrefixAad Pointer to the Inner Prefix or Aad data + * This is NULL where if the data size is 0 + * + * @param[in] innerPrefixSize Size of inner prefix/aad data in bytes + * + * @param[in] pOuterPrefix Pointer to the Outer Prefix data. This is + * NULL where the data size is 0. + * + * @param[in] outerPrefixSize Size of the outer prefix data in bytes + * + * @return void + * + *****************************************************************************/ +void LacSymQat_HashStatePrefixAadBufferPopulate( + lac_sym_qat_hash_state_buffer_info_t *pHashStateBuf, + icp_qat_la_bulk_req_ftr_t *pMsg, + Cpa8U *pInnerPrefixAad, + Cpa8U innerPrefixSize, + Cpa8U *pOuterPrefix, + Cpa8U outerPrefixSize); + +/** + ****************************************************************************** + * @ingroup LacSymQatHash + * Populate the hash request params structure + * + * @description + * This function is passed a pointer to the 128B Request block. + * (This memory must be allocated prior to calling this function). It + * populates the fields of this block using the parameters as described + * below. It is also expected that this structure has been set to 0 + * prior to calling this function. + * + * + * @param[in] pReq Pointer to 128B request block. + * + * @param[in] authOffsetInBytes start offset of data that the digest is to + * be computed on. + * + * @param[in] authLenInBytes Length of data digest calculated on + * + * @param[in] pService Pointer to service data + * + * @param[in] pHashStateBuf Pointer to hash state buffer info. This + * structure contains the pointers and sizes. + * If there is no hash state prefix buffer + * required, this parameter can be set to NULL + * + * @param[in] qatPacketType Packet type using QAT macros. The hash + * state buffer pointer and state size will be + * different depending on the packet type + * + * @param[in] hashResultSize Size of the final hash result in bytes. + * + * @param[in] digestVerify Indicates if verify is enabled or not + * + * @param[in] pAuthResult Virtual pointer to digest + * + * @return CPA_STATUS_SUCCESS or CPA_STATUS_FAIL + * + *****************************************************************************/ +CpaStatus LacSymQat_HashRequestParamsPopulate( + icp_qat_fw_la_bulk_req_t *pReq, + Cpa32U authOffsetInBytes, + Cpa32U authLenInBytes, + sal_service_t *pService, + lac_sym_qat_hash_state_buffer_info_t *pHashStateBuf, + Cpa32U qatPacketType, + Cpa32U hashResultSize, + CpaBoolean digestVerify, + Cpa8U *pAuthResult, + CpaCySymHashAlgorithm alg, + void *data); + +/** + ****************************************************************************** + * @ingroup LacSymQatHash + * + * + * @description + * This fn returns the QAT values for hash algorithm and nested fields + * + * + * @param[in] pInstance Pointer to service instance. 
+ * + * @param[in] qatHashMode value for hash mode on the fw qat + *interface. + * + * @param[in] apiHashMode value for hash mode on the QA API. + * + * @param[in] apiHashAlgorithm value for hash algorithm on the QA API. + * + * @param[out] pQatAlgorithm Pointer to return fw qat value for + *algorithm. + * + * @param[out] pQatNested Pointer to return fw qat value for nested. + * + * + * @return + * none + * + *****************************************************************************/ +void LacSymQat_HashGetCfgData(CpaInstanceHandle pInstance, + icp_qat_hw_auth_mode_t qatHashMode, + CpaCySymHashMode apiHashMode, + CpaCySymHashAlgorithm apiHashAlgorithm, + icp_qat_hw_auth_algo_t *pQatAlgorithm, + CpaBoolean *pQatNested); + +void LacSymQat_HashSetupReqParamsMetaData( + icp_qat_la_bulk_req_ftr_t *pMsg, + CpaInstanceHandle instanceHandle, + const CpaCySymHashSetupData *pHashSetupData, + CpaBoolean hashStateBuffer, + icp_qat_hw_auth_mode_t qatHashMode, + CpaBoolean digestVerify); + +#endif /* LAC_SYM_QAT_HASH_H */ diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_hash_defs_lookup.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_hash_defs_lookup.h new file mode 100644 index 000000000000..23db82a3b180 --- /dev/null +++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_hash_defs_lookup.h @@ -0,0 +1,139 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + ***************************************************************************** + * @file lac_sym_qat_hash_defs_lookup.h + * + * @defgroup LacSymQatHashDefsLookup Hash Defs Lookup + * + * @ingroup LacSymQatHash + * + * API to be used for the hash defs lookup table. + * + *****************************************************************************/ + +#ifndef LAC_SYM_QAT_HASH_DEFS_LOOKUP_P_H +#define LAC_SYM_QAT_HASH_DEFS_LOOKUP_P_H + +#include "cpa.h" +#include "cpa_cy_sym.h" + +/** +****************************************************************************** +* @ingroup LacSymQatHashDefsLookup +* Finishing Hash algorithm +* @description +* This define points to the last available hash algorithm +* @NOTE: If a new algorithm is added to the api, this #define +* MUST be updated to being the last hash algorithm in the struct +* CpaCySymHashAlgorithm in the file cpa_cy_sym.h +*****************************************************************************/ +#define CPA_CY_HASH_ALG_END CPA_CY_SYM_HASH_SM3 + +/***************************************************************************/ + +/** +****************************************************************************** +* @ingroup LacSymQatHashDefsLookup +* hash algorithm specific structure +* @description +* This structure contain constants specific to an algorithm. +*****************************************************************************/ +typedef struct lac_sym_qat_hash_alg_info_s { + Cpa32U digestLength; /**< Digest length in bytes */ + Cpa32U blockLength; /**< Block length in bytes */ + Cpa8U *initState; /**< Initialiser state for hash algorithm */ + Cpa32U stateSize; /**< size of above state in bytes */ +} lac_sym_qat_hash_alg_info_t; + +/** +****************************************************************************** +* @ingroup LacSymQatHashDefsLookup +* hash qat specific structure +* @description +* This structure contain constants as defined by the QAT for an +* algorithm. 
+*****************************************************************************/ +typedef struct lac_sym_qat_hash_qat_info_s { + Cpa32U algoEnc; /**< QAT Algorithm encoding */ + Cpa32U authCounter; /**< Counter value for Auth */ + Cpa32U state1Length; /**< QAT state1 length in bytes */ + Cpa32U state2Length; /**< QAT state2 length in bytes */ +} lac_sym_qat_hash_qat_info_t; + +/** +****************************************************************************** +* @ingroup LacSymQatHashDefsLookup +* hash defs structure +* @description +* This type contains pointers to the hash algorithm structure and +* to the hash qat specific structure +*****************************************************************************/ +typedef struct lac_sym_qat_hash_defs_s { + lac_sym_qat_hash_alg_info_t *algInfo; + /**< pointer to hash info structure */ + lac_sym_qat_hash_qat_info_t *qatInfo; + /**< pointer to hash QAT info structure */ +} lac_sym_qat_hash_defs_t; + +/** +******************************************************************************* +* @ingroup LacSymQatHashDefsLookup +* initialise the hash lookup table +* +* @description +* This function initialises the digest lookup table. +* +* @note +* This function does not have a corresponding shutdown function. +* +* @return CPA_STATUS_SUCCESS Operation successful +* @return CPA_STATUS_RESOURCE Allocating of hash lookup table failed +* +*****************************************************************************/ +CpaStatus LacSymQat_HashLookupInit(CpaInstanceHandle instanceHandle); + +/** +******************************************************************************* +* @ingroup LacSymQatHashDefsLookup +* get hash algorithm specific structure from lookup table +* +* @description +* This function looks up the hash lookup array for a structure +* containing data specific to a hash algorithm. The hashAlgorithm enum +* value MUST be in the correct range prior to calling this function. +* +* @param[in] hashAlgorithm Hash Algorithm +* @param[out] ppHashAlgInfo Hash Alg Info structure +* +* @return None +* +*****************************************************************************/ +void LacSymQat_HashAlgLookupGet(CpaInstanceHandle instanceHandle, + CpaCySymHashAlgorithm hashAlgorithm, + lac_sym_qat_hash_alg_info_t **ppHashAlgInfo); + +/** +******************************************************************************* +* @ingroup LacSymQatHashDefsLookup +* get hash defintions from lookup table. +* +* @description +* This function looks up the hash lookup array for a structure +* containing data specific to a hash algorithm. This includes both +* algorithm specific info and qat specific infro. The hashAlgorithm enum +* value MUST be in the correct range prior to calling this function. 
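+*
+* A minimal usage sketch (this assumes the algorithm value has already
+* been validated to lie within the supported range, as required above):
+*
+* @code
+* lac_sym_qat_hash_defs_t *pHashDefs = NULL;
+*
+* LacSymQat_HashDefsLookupGet(instanceHandle,
+*                             CPA_CY_SYM_HASH_SHA256,
+*                             &pHashDefs);
+* // pHashDefs->algInfo->digestLength and pHashDefs->qatInfo->state1Length
+* // now describe SHA-256 for this instance.
+* @endcode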
+* +* @param[in] hashAlgorithm Hash Algorithm +* @param[out] ppHashDefsInfo Hash Defs structure +* +* @return void +* +*****************************************************************************/ +void LacSymQat_HashDefsLookupGet(CpaInstanceHandle instanceHandle, + CpaCySymHashAlgorithm hashAlgorithm, + lac_sym_qat_hash_defs_t **ppHashDefsInfo); + +#endif /* LAC_SYM_QAT_HASH_DEFS_LOOKUP_P_H */ diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_key.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_key.h new file mode 100644 index 000000000000..a6a5d5169e11 --- /dev/null +++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_qat_key.h @@ -0,0 +1,189 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + ***************************************************************************** + * @file lac_sym_qat_key.h + * + * @defgroup LacSymQatKey Key QAT + * + * @ingroup LacSymQat + * + * interfaces for populating qat structures for a key operation + * + *****************************************************************************/ + +#ifndef LAC_SYM_QAT_KEY_H +#define LAC_SYM_QAT_KEY_H + +#include "cpa.h" +#include "lac_sym.h" +#include "icp_qat_fw_la.h" + +/** +****************************************************************************** +* @ingroup LacSymQatKey +* Number of bytes generated per iteration +* @description +* This define is the number of bytes generated per iteration +*****************************************************************************/ +#define LAC_SYM_QAT_KEY_SSL_BYTES_PER_ITERATION (16) + +/** +****************************************************************************** +* @ingroup LacSymQatKey +* Shift to calculate the number of iterations +* @description +* This define is the shift to calculate the number of iterations +*****************************************************************************/ +#define LAC_SYM_QAT_KEY_SSL_ITERATIONS_SHIFT LAC_16BYTE_ALIGNMENT_SHIFT + +/** +******************************************************************************* +* @ingroup LacSymKey +* Populate the SSL request +* +* @description +* Populate the SSL request +* +* @param[out] pKeyGenReqHdr Pointer to Key Generation request Header +* @param[out] pKeyGenReqMid Pointer to LW's 14/15 of Key Gen request +* @param[in] generatedKeyLenInBytes Length of Key generated +* @param[in] labelLenInBytes Length of Label +* @param[in] secretLenInBytes Length of Secret +* @param[in] iterations Number of iterations. This is related +* to the label length. 
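+*
+* For reference, a caller producing generatedKeyLenInBytes of output
+* would need one iteration per LAC_SYM_QAT_KEY_SSL_BYTES_PER_ITERATION
+* bytes, rounded up; an illustrative derivation (not a quote from the
+* calling code) is:
+*
+* @code
+* iterations = (generatedKeyLenInBytes +
+*               LAC_SYM_QAT_KEY_SSL_BYTES_PER_ITERATION - 1) >>
+*              LAC_SYM_QAT_KEY_SSL_ITERATIONS_SHIFT;
+* @endcode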
+* +* @return None +* +*****************************************************************************/ +void +LacSymQat_KeySslRequestPopulate(icp_qat_la_bulk_req_hdr_t *pKeyGenReqHdr, + icp_qat_fw_la_key_gen_common_t *pKeyGenReqMid, + Cpa32U generatedKeyLenInBytes, + Cpa32U labelLenInBytes, + Cpa32U secretLenInBytes, + Cpa32U iterations); + +/** +******************************************************************************* +* @ingroup LacSymKey +* Populate the TLS request +* +* @description +* Populate the TLS request +* +* @param[out] pKeyGenReq Pointer to Key Generation request +* @param[in] generatedKeyLenInBytes Length of Key generated +* @param[in] labelLenInBytes Length of Label +* @param[in] secretLenInBytes Length of Secret +* @param[in] seedLenInBytes Length of Seed +* @param[in] cmdId Command Id to differentiate TLS versions +* +* @return None +* +*****************************************************************************/ +void LacSymQat_KeyTlsRequestPopulate( + icp_qat_fw_la_key_gen_common_t *pKeyGenReqParams, + Cpa32U generatedKeyLenInBytes, + Cpa32U labelLenInBytes, + Cpa32U secretLenInBytes, + Cpa8U seedLenInBytes, + icp_qat_fw_la_cmd_id_t cmdId); + +/** +******************************************************************************* +* @ingroup LacSymKey +* Populate MGF request +* +* @description +* Populate MGF request +* +* @param[out] pKeyGenReqHdr Pointer to Key Generation request Header +* @param[out] pKeyGenReqMid Pointer to LW's 14/15 of Key Gen request +* @param[in] seedLenInBytes Length of Seed +* @param[in] maskLenInBytes Length of Mask +* @param[in] hashLenInBytes Length of hash +* +* @return None +* +*****************************************************************************/ +void +LacSymQat_KeyMgfRequestPopulate(icp_qat_la_bulk_req_hdr_t *pKeyGenReqHdr, + icp_qat_fw_la_key_gen_common_t *pKeyGenReqMid, + Cpa8U seedLenInBytes, + Cpa16U maskLenInBytes, + Cpa8U hashLenInBytes); + +/** +******************************************************************************* +* @ingroup LacSymKey +* Populate the SSL key material input +* +* @description +* Populate the SSL key material input +* +* @param[in] pService Pointer to service +* @param[out] pSslKeyMaterialInput Pointer to SSL key material input +* @param[in] pSeed Pointer to Seed +* @param[in] labelPhysAddr Physical address of the label +* @param[in] pSecret Pointer to Secret +* +* @return None +* +*****************************************************************************/ +void LacSymQat_KeySslKeyMaterialInputPopulate( + sal_service_t *pService, + icp_qat_fw_la_ssl_key_material_input_t *pSslKeyMaterialInput, + void *pSeed, + Cpa64U labelPhysAddr, + void *pSecret); + +/** +******************************************************************************* +* @ingroup LacSymKey +* Populate the TLS key material input +* +* @description +* Populate the TLS key material input +* +* @param[in] pService Pointer to service +* @param[out] pTlsKeyMaterialInput Pointer to TLS key material input +* @param[in] pSeed Pointer to Seed +* @param[in] labelPhysAddr Physical address of the label +* +* @return None +* +*****************************************************************************/ +void LacSymQat_KeyTlsKeyMaterialInputPopulate( + sal_service_t *pService, + icp_qat_fw_la_tls_key_material_input_t *pTlsKeyMaterialInput, + void *pSeed, + Cpa64U labelPhysAddr); + +/** +******************************************************************************* +* @ingroup LacSymKey +* Populate the TLS HKDF key material input +* +* 
@description +* Populate the TLS HKDF key material input +* +* @param[in] pService Pointer to service +* @param[out] pTlsKeyMaterialInput Pointer to TLS key material input +* @param[in] pSeed Pointer to Seed +* @param[in] labelPhysAddr Physical address of the label +* @param[in] cmdId Command ID +* +* @return None +* +*****************************************************************************/ +void LacSymQat_KeyTlsHKDFKeyMaterialInputPopulate( + sal_service_t *pService, + icp_qat_fw_la_hkdf_key_material_input_t *pTlsKeyMaterialInput, + CpaCyKeyGenHKDFOpData *pKeyGenTlsOpData, + Cpa64U subLabelsPhysAddr, + icp_qat_fw_la_cmd_id_t cmdId); + +#endif /* LAC_SYM_QAT_KEY_H */ diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_queue.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_queue.h new file mode 100644 index 000000000000..d7a5cd3c9e92 --- /dev/null +++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_queue.h @@ -0,0 +1,51 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + ****************************************************************************** + * @file lac_sym_queue.h + * + * @defgroup LacSymQueue Symmetric request queueing functions + * + * @ingroup LacSym + * + * Function prototypes for sending/queuing symmetric requests + *****************************************************************************/ + +#ifndef LAC_SYM_QUEUE_H +#define LAC_SYM_QUEUE_H + +#include "cpa.h" +#include "lac_session.h" +#include "lac_sym.h" + +/** +******************************************************************************* +* @ingroup LacSymQueue +* Send a request message to the QAT, or queue it if necessary +* +* @description +* This function will send a request message to the QAT. However, if a +* blocking condition exists on the session (e.g. partial packet in flight, +* precompute in progress), then the message will instead be pushed on to +* the request queue for the session and will be sent later to the QAT +* once the blocking condition is cleared. +* +* @param[in] instanceHandle Handle for instance of QAT +* @param[in] pRequest Pointer to request cookie +* @param[out] pSessionDesc Pointer to session descriptor +* +* +* @retval CPA_STATUS_SUCCESS Success +* @retval CPA_STATUS_FAIL Function failed. +* @retval CPA_STATUS_RESOURCE Problem Acquiring system resource +* @retval CPA_STATUS_RETRY Failed to send message to QAT due to queue +* full condition +* +*****************************************************************************/ +CpaStatus LacSymQueue_RequestSend(const CpaInstanceHandle instanceHandle, + lac_sym_bulk_cookie_t *pRequest, + lac_session_desc_t *pSessionDesc); + +#endif /* LAC_SYM_QUEUE_H */ diff --git a/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_stats.h b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_stats.h new file mode 100644 index 000000000000..b5d823420163 --- /dev/null +++ b/sys/dev/qat/qat_api/common/crypto/sym/include/lac_sym_stats.h @@ -0,0 +1,191 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright(c) 2007-2022 Intel Corporation */ +/* $FreeBSD$ */ + +/** + *************************************************************************** + * @file lac_sym_stats.h + * + * @defgroup LacSymCommon Symmetric Common + * + * @ingroup LacSym + * + * Symetric Common consists of common statistics, buffer and partial packet + * functionality. 
+ * + ***************************************************************************/ + +/** + *************************************************************************** + * @defgroup LacSymStats Statistics + * + * @ingroup LacSymCommon + * + * definitions and prototypes for LAC symmetric statistics. + * + * @lld_start + * In the LAC API the stats fields are defined as Cpa32U but + * QatUtilsAtomic is the type that the atomic API supports. Therefore we + * need to define a structure internally with the same fields as the API + * stats structure, but each field must be of type QatUtilsAtomic. + * + * - <b>Incrementing Statistics:</b>\n + * Atomically increment the statistic on the internal stats structure. + * + * - <b>Providing a copy of the stats back to the user:</b>\n + * Use atomicGet to read the atomic variable for each stat field in the + * local internal stat structure. These values are saved in structure + * (as defined by the LAC API) that the client will provide a pointer + * to as a parameter. + * + * - <b>Stats Show:</b>\n + * Use atomicGet to read the atomic variables for each field in the local + * internal stat structure and print to the screen + * + * - <b>Stats Array:</b>\n + * A macro is used to get the offset off the stat in the structure. This + * offset is passed to a function which uses it to increment the stat + * at that offset. + * + * @lld_end + * + ***************************************************************************/ + +/***************************************************************************/ + +#ifndef LAC_SYM_STATS_H +#define LAC_SYM_STATS_H + +/* +****************************************************************************** +* Include public/global header files +****************************************************************************** +*/ + +#include "cpa.h" +#include "cpa_cy_sym.h" +#include "cpa_cy_common.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ + +/** +******************************************************************************* +* @ingroup LacSymStats +* increment a symmetric statistic +* +* @description +* Increment the statistics +* +* @param statistic IN The field in the symmetric statistics structure to be +* incremented +* @param instanceHandle IN engine Id Number +* +* @retval None +* +*****************************************************************************/ +#define LAC_SYM_STAT_INC(statistic, instanceHandle) \ + LacSym_StatsInc(offsetof(CpaCySymStats64, statistic), instanceHandle) + +/** +******************************************************************************* +* @ingroup LacSymStats +* initialises the symmetric stats +* +* @description +* This function allocates and initialises the stats array to 0 +* +* @param instanceHandle Instance Handle +* +* @retval CPA_STATUS_SUCCESS initialisation successful +* @retval CPA_STATUS_RESOURCE array allocation failed +* +*****************************************************************************/ +CpaStatus LacSym_StatsInit(CpaInstanceHandle instanceHandle); + +/** +******************************************************************************* +* @ingroup LacSymStats +* Frees the symmetric stats +* +* @description +* This function frees the stats array +* +* @param instanceHandle Instance Handle +* +* @retval None +* +*****************************************************************************/ +void 
LacSym_StatsFree(CpaInstanceHandle instanceHandle); + +/** +******************************************************************************* +* @ingroup LacSymStats +* Inrement a stat +* +* @description +* This function incrementes a stat for a specific engine. +* +* @param offset IN offset of stat field in structure +* @param instanceHandle IN qat Handle +* +* @retval None +* +*****************************************************************************/ +void LacSym_StatsInc(Cpa32U offset, CpaInstanceHandle instanceHandle); + +/** +******************************************************************************* +* @ingroup LacSymStats +* Copy the contents of the statistics structure for an engine +* +* @description +* This function copies the 32bit symmetric statistics structure for +* a specific engine into an address supplied as a parameter. +* +* @param instanceHandle IN engine Id Number +* @param pSymStats OUT stats structure to copy the stats for the into +* +* @retval None +* +*****************************************************************************/ +void LacSym_Stats32CopyGet(CpaInstanceHandle instanceHandle, + struct _CpaCySymStats *const pSymStats); + +/** +******************************************************************************* +* @ingroup LacSymStats +* Copy the contents of the statistics structure for an engine +* +* @description +* This function copies the 64bit symmetric statistics structure for +* a specific engine into an address supplied as a parameter. +* +* @param instanceHandle IN engine Id Number +* @param pSymStats OUT stats structure to copy the stats for the into +* +* @retval None +* +*****************************************************************************/ +void LacSym_Stats64CopyGet(CpaInstanceHandle instanceHandle, + CpaCySymStats64 *const pSymStats); + +/** +******************************************************************************* +* @ingroup LacSymStats +* print the symmetric stats to standard output +* +* @description +* The statistics for symmetric are printed to standard output. 
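+*
+* The counters rendered here are the ones accumulated on the data path
+* with the LAC_SYM_STAT_INC() macro defined above, for example (the
+* field names are those of CpaCySymStats64):
+*
+* @code
+* LAC_SYM_STAT_INC(numSymOpRequests, instanceHandle);
+* // ... later, in the response path ...
+* LAC_SYM_STAT_INC(numSymOpCompleted, instanceHandle);
+* @endcode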
+* +* @retval None +* +* @see LacSym_StatsCopyGet() +* +*****************************************************************************/ +void LacSym_StatsShow(CpaInstanceHandle instanceHandle); + +#endif /*LAC_SYM_STATS_H_*/ diff --git a/sys/dev/qat/qat_api/common/crypto/sym/key/lac_sym_key.c b/sys/dev/qat/qat_api/common/crypto/sym/key/lac_sym_key.c new file mode 100644 index 000000000000..2f27a1781876 --- /dev/null +++ b/sys/dev/qat/qat_api/common/crypto/sym/key/lac_sym_key.c @@ -0,0 +1,3021 @@ +/*************************************************************************** + * + * <COPYRIGHT_TAG> + * + ***************************************************************************/ + +/** + ***************************************************************************** + * @file lac_sym_key.c + * + * @ingroup LacSymKey + * + * This file contains the implementation of all keygen functionality + * + *****************************************************************************/ + +/* +******************************************************************************* +* Include public/global header files +******************************************************************************* +*/ +#include "cpa.h" +#include "cpa_cy_key.h" +#include "cpa_cy_im.h" + +/* +******************************************************************************* +* Include private header files +******************************************************************************* +*/ +#include "icp_accel_devices.h" +#include "icp_adf_debug.h" +#include "icp_adf_init.h" +#include "icp_adf_transport.h" + +#include "qat_utils.h" + +#include "lac_log.h" +#include "lac_hooks.h" +#include "lac_sym.h" +#include "lac_sym_qat_hash_defs_lookup.h" +#include "lac_sym_qat.h" +#include "lac_sal.h" +#include "lac_sym_key.h" +#include "lac_sal_types_crypto.h" +#include "sal_service_state.h" +#include "lac_sym_qat_key.h" +#include "lac_sym_hash_defs.h" +#include "sal_statistics.h" + +/* Number of statistics */ +#define LAC_KEY_NUM_STATS (sizeof(CpaCyKeyGenStats64) / sizeof(Cpa64U)) + +#define LAC_KEY_STAT_INC(statistic, instanceHandle) \ + do { \ + sal_crypto_service_t *pService = NULL; \ + pService = (sal_crypto_service_t *)instanceHandle; \ + if (CPA_TRUE == \ + pService->generic_service_info.stats \ + ->bKeyGenStatsEnabled) { \ + qatUtilsAtomicInc( \ + &pService \ + ->pLacKeyStats[offsetof(CpaCyKeyGenStats64, \ + statistic) / \ + sizeof(Cpa64U)]); \ + } |