author     John Baldwin <jhb@FreeBSD.org>  2020-03-27 18:25:23 +0000
committer  John Baldwin <jhb@FreeBSD.org>  2020-03-27 18:25:23 +0000
commit     c03414326909ed7a740be3ba63fbbef01fe513a8 (patch)
tree       9067f28738df03bb4b685773c52ba32517468212
parent     4d94781b4d9e03b8dbd6604d7e2280d342d3cf7e (diff)
download   src-c03414326909.tar.gz
           src-c03414326909.zip
Refactor driver and consumer interfaces for OCF (in-kernel crypto).
- The linked list of cryptoini structures used in session
  initialization is replaced with a new flat structure: struct
  crypto_session_params.  This structure includes a new mode to define
  how the other fields should be interpreted.  Available modes include:

  - COMPRESS (for compression/decompression)
  - CIPHER (for simple encryption/decryption)
  - DIGEST (computing and verifying digests)
  - AEAD (combined auth and encryption such as AES-GCM and AES-CCM)
  - ETA (combined auth and encryption using encrypt-then-authenticate)

  Additional modes could be added in the future (e.g. if we wanted to
  support TLS MtE for AES-CBC in the kernel we could add a new mode for
  that.  TLS modes might also affect how AAD is interpreted, etc.)

  The flat structure also includes the key lengths and algorithms as
  before.  However, code doesn't have to walk the linked list and
  switch on the algorithm to determine which key is the auth key vs.
  the encryption key.  The 'csp_auth_*' fields are always used for auth
  keys and settings and 'csp_cipher_*' for cipher keys and settings.
  (Compression algorithms are stored in csp_cipher_alg.)

- Drivers no longer register a list of supported algorithms.  This
  doesn't quite work when you factor in modes (e.g. a driver might
  support both AES-CBC and SHA2-256-HMAC separately but not combined
  for ETA).  Instead, a new 'crypto_probesession' method has been added
  to the kobj interface for symmetric crypto drivers.  This method
  returns a negative value on success (similar to how device_probe
  works) and the crypto framework uses this value to pick the "best"
  driver.  There are three constants for hardware (e.g. ccr),
  accelerated software (e.g. aesni), and plain software (cryptosoft)
  that give preference in that order.  One effect of this is that if
  you request only hardware when creating a new session, you will no
  longer get a session using accelerated software.  Another effect is
  that the default setting to disallow software crypto via /dev/crypto
  now disables accelerated software.
  Once a driver is chosen, 'crypto_newsession' is invoked as before.

- Crypto operations are now solely described by the flat 'cryptop'
  structure.  The linked list of descriptors has been removed.

  A separate enum has been added to describe the type of data buffer
  in use instead of using CRYPTO_F_* flags to make it easier to add
  more types in the future if needed (e.g. wired userspace buffers for
  zero-copy).  It will also make it easier to re-introduce separate
  input and output buffers (in-kernel TLS would benefit from this).

  Try to make the flags related to IV handling less insane:

  - CRYPTO_F_IV_SEPARATE means that the IV is stored in the 'crp_iv'
    member of the operation structure.  If this flag is not set, the
    IV is stored in the data buffer at the 'crp_iv_start' offset.
  - CRYPTO_F_IV_GENERATE means that a random IV should be generated
    and stored into the data buffer.  This cannot be used with
    CRYPTO_F_IV_SEPARATE.

  If a consumer wants to deal with explicit vs. implicit IVs, etc., it
  can always generate the IV however it needs, store partial IVs in
  the buffer and the full IV/nonce in crp_iv, and set
  CRYPTO_F_IV_SEPARATE.

  The layout of the buffer is now described via fields in cryptop.
  crp_aad_start and crp_aad_length define the boundaries of any AAD.
  Previously with GCM and CCM you defined an auth crd with this range,
  but for ETA your auth crd had to span both the AAD and plaintext
  (and they had to be adjacent).

  crp_payload_start and crp_payload_length define the boundaries of
  the plaintext/ciphertext.  Modes that only do a single operation
  (COMPRESS, CIPHER, DIGEST) should only use this region and leave the
  AAD region empty.

  If a digest is present (or should be generated), its starting
  location is marked by crp_digest_start.

  Instead of using the CRD_F_ENCRYPT flag to determine the direction
  of the operation, cryptop now includes an 'op' field defining the
  operation to perform.
  For digests I've added a new VERIFY digest mode which assumes a
  digest is present in the input and fails the request with EBADMSG if
  it doesn't match the internally-computed digest.  GCM and CCM
  already assumed this, and the new AEAD mode requires this for
  decryption.  The new ETA mode now also requires this for decryption,
  so IPsec and GELI no longer do their own authentication
  verification.  Simple DIGEST operations can also do this, though
  there are no in-tree consumers.

  To eventually support some refcounting to close races, the session
  cookie is now passed to crypto_getreq() and clients should no longer
  set crp_session directly.

- Asymmetric crypto operation structures should be allocated via
  crypto_getkreq() and freed via crypto_freekreq().  This permits the
  crypto layer to track open asym requests and close races with a
  driver trying to unregister while asym requests are in flight.

- crypto_copyback, crypto_copydata, crypto_apply, and
  crypto_contiguous_subsegment now accept the 'crp' object as the
  first parameter instead of individual members.  This makes it easier
  to deal with different buffer types in the future as well as
  separate input and output buffers.  It's also simpler for driver
  writers to use.

- bus_dmamap_load_crp() loads a DMA mapping for a crypto buffer.  This
  understands the various types of buffers so that drivers that use
  DMA do not have to be aware of different buffer types.

- Helper routines now exist to build an auth context for HMAC IPAD
  and OPAD.  This reduces some duplicated work among drivers.

- Key buffers are now treated as const throughout the framework and in
  device drivers.  However, session key buffers provided when a
  session is created are expected to remain alive for the duration of
  the session.

- GCM and CCM sessions now only specify a cipher algorithm and a
  cipher key.  The redundant auth information is not needed or used.
- For cryptosoft, split up the code a bit such that the 'process'
  callback now invokes a function pointer in the session.  This
  function pointer is set based on the mode (in effect), though it
  simplifies a few edge cases that would otherwise be in the switch in
  'process'.  It does split up GCM vs. CCM, which I think is more
  readable even if there is some duplication.

- I changed /dev/crypto to support GMAC requests using
  CRYPTO_AES_NIST_GMAC as an auth algorithm and updated cryptocheck to
  work with it.

- Combined cipher and auth sessions via /dev/crypto now always use ETA
  mode.  The COP_F_CIPHER_FIRST flag is now a no-op that is ignored.
  This was actually documented as being true in crypto(4) before, but
  the code had not implemented this before I added the CIPHER_FIRST
  flag.

- I have not yet updated /dev/crypto to be aware of explicit modes for
  sessions.  I will probably do that at some point in the future as
  well as teach it about IV/nonce and tag lengths for AEAD so we can
  support all of the NIST KAT tests for GCM and CCM.

- I've split up the existing crypto.9 manpage into several pages of
  which many are written from scratch.

- I have converted all drivers and consumers in the tree and verified
  that they compile, but I have not tested all of them.

  I have tested the following drivers:

  - cryptosoft
  - aesni (AES only)
  - blake2
  - ccr

  and the following consumers:

  - cryptodev
  - IPsec
  - ktls_ocf
  - GELI (lightly)

  I have not tested the following:

  - ccp
  - aesni with sha
  - hifn
  - kgssapi_krb5
  - ubsec
  - padlock
  - safe
  - armv8_crypto (aarch64)
  - glxsb (i386)
  - sec (ppc)
  - cesa (armv7)
  - cryptocteon (mips64)
  - nlmsec (mips64)

Discussed with:	cem
Relnotes:	yes
Sponsored by:	Chelsio Communications
Differential Revision:	https://reviews.freebsd.org/D23677
Notes:
    svn path=/head/; revision=359374
-rw-r--r--  ObsoleteFiles.inc  5
-rw-r--r--  share/man/man4/crypto.4  22
-rw-r--r--  share/man/man7/crypto.7  32
-rw-r--r--  share/man/man9/Makefile  45
-rw-r--r--  share/man/man9/bus_dma.9  21
-rw-r--r--  share/man/man9/crypto.9  753
-rw-r--r--  share/man/man9/crypto_asym.9  178
-rw-r--r--  share/man/man9/crypto_driver.9  392
-rw-r--r--  share/man/man9/crypto_request.9  419
-rw-r--r--  share/man/man9/crypto_session.9  245
-rw-r--r--  sys/crypto/aesni/aesni.c  834
-rw-r--r--  sys/crypto/aesni/aesni.h  14
-rw-r--r--  sys/crypto/aesni/aesni_wrap.c  58
-rw-r--r--  sys/crypto/armv8/armv8_crypto.c  244
-rw-r--r--  sys/crypto/blake2/blake2_cryptodev.c  215
-rw-r--r--  sys/crypto/ccp/ccp.c  468
-rw-r--r--  sys/crypto/ccp/ccp.h  19
-rw-r--r--  sys/crypto/ccp/ccp_hardware.c  234
-rw-r--r--  sys/crypto/via/padlock.c  140
-rw-r--r--  sys/crypto/via/padlock.h  10
-rw-r--r--  sys/crypto/via/padlock_cipher.c  124
-rw-r--r--  sys/crypto/via/padlock_hash.c  118
-rw-r--r--  sys/dev/cesa/cesa.c  599
-rw-r--r--  sys/dev/cesa/cesa.h  5
-rw-r--r--  sys/dev/cxgbe/adapter.h  2
-rw-r--r--  sys/dev/cxgbe/crypto/t4_crypto.c  1299
-rw-r--r--  sys/dev/cxgbe/crypto/t4_keyctx.c  40
-rw-r--r--  sys/dev/cxgbe/tom/t4_tls.c  4
-rw-r--r--  sys/dev/glxsb/glxsb.c  268
-rw-r--r--  sys/dev/glxsb/glxsb.h  6
-rw-r--r--  sys/dev/glxsb/glxsb_hash.c  100
-rw-r--r--  sys/dev/hifn/hifn7751.c  622
-rw-r--r--  sys/dev/hifn/hifn7751var.h  15
-rw-r--r--  sys/dev/safe/safe.c  915
-rw-r--r--  sys/dev/safe/safevar.h  12
-rw-r--r--  sys/dev/sec/sec.c  540
-rw-r--r--  sys/dev/sec/sec.h  18
-rw-r--r--  sys/dev/ubsec/ubsec.c  585
-rw-r--r--  sys/dev/ubsec/ubsecvar.h  10
-rw-r--r--  sys/geom/eli/g_eli.c  49
-rw-r--r--  sys/geom/eli/g_eli.h  20
-rw-r--r--  sys/geom/eli/g_eli_crypto.c  50
-rw-r--r--  sys/geom/eli/g_eli_integrity.c  183
-rw-r--r--  sys/geom/eli/g_eli_privacy.c  85
-rw-r--r--  sys/kern/subr_bus_dma.c  51
-rw-r--r--  sys/kern/uipc_ktls.c  3
-rw-r--r--  sys/kgssapi/krb5/kcrypto_aes.c  111
-rw-r--r--  sys/kgssapi/krb5/kcrypto_des.c  55
-rw-r--r--  sys/kgssapi/krb5/kcrypto_des3.c  110
-rw-r--r--  sys/mips/cavium/cryptocteon/cavium_crypto.c  180
-rw-r--r--  sys/mips/cavium/cryptocteon/cryptocteon.c  449
-rw-r--r--  sys/mips/cavium/cryptocteon/cryptocteonvar.h  12
-rw-r--r--  sys/mips/nlm/dev/sec/nlmrsa.c  16
-rw-r--r--  sys/mips/nlm/dev/sec/nlmsec.c  438
-rw-r--r--  sys/mips/nlm/dev/sec/nlmseclib.c  125
-rw-r--r--  sys/mips/nlm/dev/sec/nlmseclib.h  22
-rw-r--r--  sys/mips/nlm/hal/nlmsaelib.h  4
-rw-r--r--  sys/netipsec/xform.h  5
-rw-r--r--  sys/netipsec/xform_ah.c  78
-rw-r--r--  sys/netipsec/xform_esp.c  173
-rw-r--r--  sys/netipsec/xform_ipcomp.c  79
-rw-r--r--  sys/opencrypto/criov.c  85
-rw-r--r--  sys/opencrypto/crypto.c  1309
-rw-r--r--  sys/opencrypto/cryptodev.c  559
-rw-r--r--  sys/opencrypto/cryptodev.h  150
-rw-r--r--  sys/opencrypto/cryptodev_if.m  118
-rw-r--r--  sys/opencrypto/cryptosoft.c  1703
-rw-r--r--  sys/opencrypto/cryptosoft.h  71
-rw-r--r--  sys/opencrypto/ktls_ocf.c  95
-rw-r--r--  sys/opencrypto/xform_gmac.c  6
-rw-r--r--  sys/sys/bus_dma.h  8
-rw-r--r--  sys/sys/param.h  2
-rw-r--r--  tests/sys/opencrypto/cryptodev.py  5
-rw-r--r--  tests/sys/opencrypto/cryptodevh.py  3
-rw-r--r--  tests/sys/opencrypto/cryptotest.py  8
-rw-r--r--  tools/tools/crypto/cryptocheck.c  196
76 files changed, 8321 insertions, 7925 deletions
diff --git a/ObsoleteFiles.inc b/ObsoleteFiles.inc
index 63df5f958266..3474068a9979 100644
--- a/ObsoleteFiles.inc
+++ b/ObsoleteFiles.inc
@@ -36,6 +36,11 @@
# xargs -n1 | sort | uniq -d;
# done
+# 20200327: OCF refactoring
+OLD_FILES+=usr/share/man/man9/crypto_find_driver.9
+OLD_FILES+=usr/share/man/man9/crypto_register.9
+OLD_FILES+=usr/share/man/man9/crypto_unregister.9
+
# 20200323: INTERNALLIB don't install headers anymore
OLD_FILES+=usr/include/libelftc.h
OLD_FILES+=usr/include/libifconfig.h
diff --git a/share/man/man4/crypto.4 b/share/man/man4/crypto.4
index c28a80fbe5e4..60341af89659 100644
--- a/share/man/man4/crypto.4
+++ b/share/man/man4/crypto.4
@@ -60,7 +60,7 @@
.\"
.\" $FreeBSD$
.\"
-.Dd December 17, 2019
+.Dd March 27, 2020
.Dt CRYPTO 4
.Os
.Sh NAME
@@ -156,7 +156,7 @@ file desriptor.
The symmetric-key operation mode provides a context-based API
to traditional symmetric-key encryption (or privacy) algorithms,
or to keyed and unkeyed one-way hash (HMAC and MAC) algorithms.
-The symmetric-key mode also permits fused operation,
+The symmetric-key mode also permits encrypt-then-authenticate fused operation,
where the hardware performs both a privacy algorithm and an integrity-check
algorithm in a single pass over the data: either a fused
encrypt/HMAC-generate operation, or a fused HMAC-verify/decrypt operation.
@@ -314,16 +314,14 @@ supplies the length of the input buffer; the fields
.Fa cr_op-\*[Gt]iv
supply the addresses of the input buffer, output buffer,
one-way hash, and initialization vector, respectively.
-If a session is using both a privacy algorithm and a hash algorithm,
-the request will generate a hash of the input buffer before
-generating the output buffer by default.
-If the
-.Dv COP_F_CIPHER_FIRST
-flag is included in the
-.Fa cr_op-\*[Gt]flags
-field,
-then the request will generate a hash of the output buffer after
-executing the privacy algorithm.
+.Pp
+If a session is using either fused encrypt-then-authenticate or
+an AEAD algorithm,
+decryption operations require the associated hash as an input.
+If the hash is incorrect, the
+operation will fail with
+.Dv EBADMSG
+and the output buffer will remain unchanged.
.It Dv CIOCCRYPTAEAD Fa struct crypt_aead *cr_aead
.Bd -literal
struct crypt_aead {
diff --git a/share/man/man7/crypto.7 b/share/man/man7/crypto.7
index 0bf351aab32c..747d95915b89 100644
--- a/share/man/man7/crypto.7
+++ b/share/man/man7/crypto.7
@@ -27,7 +27,7 @@
.\"
.\" $FreeBSD$
.\"
-.Dd January 2, 2015
+.Dd March 27, 2020
.Dt CRYPTO 7
.Os
.Sh NAME
@@ -68,19 +68,13 @@ This algorithm implements Cipher-block chaining.
.El
.Pp
This algorithm implements Galois/Counter Mode.
-This is the cipher part of an AEAD
+This cipher uses AEAD
.Pq Authenticated Encryption with Associated Data
mode.
-This requires use of the use of a proper authentication mode, one of
-.Dv CRYPTO_AES_128_NIST_GMAC ,
-.Dv CRYPTO_AES_192_NIST_GMAC
-or
-.Dv CRYPTO_AES_256_NIST_GMAC ,
-that corresponds with the number of bits in the key that you are using.
.Pp
-The associated data (if any) must be provided by the authentication mode op.
-The authentication tag will be read/written from/to the offset crd_inject
-specified in the descriptor for the authentication mode.
+The authentication tag will be read/written from/to the offset
+.Va crp_digest_start
+specified in the request.
.Pp
Note: You must provide an IV on every call.
.It Dv CRYPTO_AES_ICM
@@ -118,22 +112,6 @@ as defined in NIST SP 800-38E.
NOTE: The ciphertext stealing part is not implemented which is why this cipher
is listed as having a block size of 16 instead of 1.
.El
-.Pp
-Authentication algorithms:
-.Bl -tag -width ".Dv CRYPTO_AES_256_NIST_GMAC"
-.It CRYPTO_AES_128_NIST_GMAC
-See
-.Dv CRYPTO_AES_NIST_GCM_16
-in the cipher mode section.
-.It CRYPTO_AES_192_NIST_GMAC
-See
-.Dv CRYPTO_AES_NIST_GCM_16
-in the cipher mode section.
-.It CRYPTO_AES_256_NIST_GMAC
-See
-.Dv CRYPTO_AES_NIST_GCM_16
-in the cipher mode section.
-.El
.Sh SEE ALSO
.Xr crypto 4 ,
.Xr crypto 9
diff --git a/share/man/man9/Makefile b/share/man/man9/Makefile
index 5f079603a26c..bfdef71f8280 100644
--- a/share/man/man9/Makefile
+++ b/share/man/man9/Makefile
@@ -71,6 +71,10 @@ MAN= accept_filter.9 \
cr_seeothergids.9 \
cr_seeotheruids.9 \
crypto.9 \
+ crypto_asym.9 \
+ crypto_driver.9 \
+ crypto_request.9 \
+ crypto_session.9 \
CTASSERT.9 \
DB_COMMAND.9 \
DECLARE_GEOM_CLASS.9 \
@@ -889,20 +893,33 @@ MLINKS+=cpuset.9 CPUSET_T_INITIALIZER.9 \
cpuset.9 CPU_COPY_STORE_REL.9
MLINKS+=critical_enter.9 critical.9 \
critical_enter.9 critical_exit.9
-MLINKS+=crypto.9 crypto_dispatch.9 \
- crypto.9 crypto_done.9 \
- crypto.9 crypto_freereq.9 \
- crypto.9 crypto_freesession.9 \
- crypto.9 crypto_get_driverid.9 \
- crypto.9 crypto_getreq.9 \
- crypto.9 crypto_kdispatch.9 \
- crypto.9 crypto_kdone.9 \
- crypto.9 crypto_kregister.9 \
- crypto.9 crypto_newsession.9 \
- crypto.9 crypto_register.9 \
- crypto.9 crypto_unblock.9 \
- crypto.9 crypto_unregister.9 \
- crypto.9 crypto_unregister_all.9
+MLINKS+=crypto_asym.9 crypto_kdispatch.9 \
+ crypto_asym.9 crypto_kdone.9 \
+ crypto_asym.9 crypto_kregister.9 \
+ crypto_asym.9 CRYPTODEV_KPROCESS.9
+MLINKS+=crypto_driver.9 crypto_apply.9 \
+ crypto_driver.9 crypto_contiguous_segment.9 \
+ crypto_driver.9 crypto_copyback.9 \
+ crypto_driver.9 crypto_copydata.9 \
+ crypto_driver.9 crypto_done.9 \
+ crypto_driver.9 crypto_get_driverid.9 \
+ crypto_driver.9 crypto_get_driver_session.9 \
+ crypto_driver.9 crypto_unblock.9 \
+ crypto_driver.9 crypto_unregister_all.9 \
+ crypto_driver.9 CRYPTODEV_FREESESSION.9 \
+ crypto_driver.9 CRYPTODEV_NEWSESSION.9 \
+ crypto_driver.9 CRYPTODEV_PROBESESSION.9 \
+ crypto_driver.9 CRYPTODEV_PROCESS.9 \
+ crypto_driver.9 hmac_init_ipad.9 \
+ crypto_driver.9 hmac_init_opad.9
+MLINKS+=crypto_request.9 crypto_dispatch.9 \
+ crypto_request.9 crypto_freereq.9 \
+ crypto_request.9 crypto_getreq.9
+MLINKS+=crypto_session.9 crypto_auth_hash.9 \
+ crypto_session.9 crypto_cipher.9 \
+ crypto_session.9 crypto_get_params.9 \
+ crypto_session.9 crypto_newsession.9 \
+ crypto_session.9 crypto_freesession.9
MLINKS+=DB_COMMAND.9 DB_SHOW_ALL_COMMAND.9 \
DB_COMMAND.9 DB_SHOW_COMMAND.9
MLINKS+=DECLARE_MODULE.9 DECLARE_MODULE_TIED.9
diff --git a/share/man/man9/bus_dma.9 b/share/man/man9/bus_dma.9
index b47cb13e1689..110a227a09c7 100644
--- a/share/man/man9/bus_dma.9
+++ b/share/man/man9/bus_dma.9
@@ -53,7 +53,7 @@
.\" $FreeBSD$
.\" $NetBSD: bus_dma.9,v 1.25 2002/10/14 13:43:16 wiz Exp $
.\"
-.Dd August 11, 2018
+.Dd March 27, 2020
.Dt BUS_DMA 9
.Os
.Sh NAME
@@ -68,6 +68,7 @@
.Nm bus_dmamap_load ,
.Nm bus_dmamap_load_bio ,
.Nm bus_dmamap_load_ccb ,
+.Nm bus_dmamap_load_crp ,
.Nm bus_dmamap_load_mbuf ,
.Nm bus_dmamap_load_mbuf_sg ,
.Nm bus_dmamap_load_uio ,
@@ -118,6 +119,10 @@
"union ccb *ccb" "bus_dmamap_callback_t *callback" "void *callback_arg" \
"int flags"
.Ft int
+.Fn bus_dmamap_load_crp "bus_dma_tag_t dmat" "bus_dmamap_t map" \
+"struct crypto *crp" "bus_dmamap_callback_t *callback" "void *callback_arg" \
+"int flags"
+.Ft int
.Fn bus_dmamap_load_mbuf "bus_dma_tag_t dmat" "bus_dmamap_t map" \
"struct mbuf *mbuf" "bus_dmamap_callback2_t *callback" "void *callback_arg" \
"int flags"
@@ -387,9 +392,10 @@ the load of a
.Vt bus_dmamap_t
via
.Fn bus_dmamap_load ,
-.Fn bus_dmamap_load_bio
+.Fn bus_dmamap_load_bio ,
+.Fn bus_dmamap_load_ccb ,
or
-.Fn bus_dmamap_load_ccb .
+.Fn bus_dmamap_load_crp .
Callbacks are of the format:
.Bl -tag -width indent
.It Ft void
@@ -879,6 +885,15 @@ XPT_CONT_TARGET_IO
.It
XPT_SCSI_IO
.El
+.It Fn bus_dmamap_load_crp "dmat" "map" "crp" "callback" "callback_arg" "flags"
+This is a variation of
+.Fn bus_dmamap_load
+which maps buffers pointed to by
+.Fa crp
+for DMA transfers.
+The
+.Dv BUS_DMA_NOWAIT
+flag is implied, thus no callback deferral will happen.
.It Fn bus_dmamap_load_mbuf "dmat" "map" "mbuf" "callback2" "callback_arg" \
"flags"
This is a variation of
diff --git a/share/man/man9/crypto.9 b/share/man/man9/crypto.9
index 3f312f2fb624..67afe01d2f68 100644
--- a/share/man/man9/crypto.9
+++ b/share/man/man9/crypto.9
@@ -17,7 +17,7 @@
.\"
.\" $FreeBSD$
.\"
-.Dd December 17, 2019
+.Dd March 27, 2020
.Dt CRYPTO 9
.Os
.Sh NAME
@@ -25,120 +25,50 @@
.Nd API for cryptographic services in the kernel
.Sh SYNOPSIS
.In opencrypto/cryptodev.h
-.Ft int32_t
-.Fn crypto_get_driverid "device_t dev" "size_t session_size" "int flags"
-.Ft int
-.Fn crypto_register "uint32_t driverid" "int alg" "uint16_t maxoplen" "uint32_t flags"
-.Ft int
-.Fn crypto_kregister "uint32_t driverid" "int kalg" "uint32_t flags"
-.Ft int
-.Fn crypto_unregister "uint32_t driverid" "int alg"
-.Ft int
-.Fn crypto_unregister_all "uint32_t driverid"
-.Ft void
-.Fn crypto_done "struct cryptop *crp"
-.Ft void
-.Fn crypto_kdone "struct cryptkop *krp"
-.Ft int
-.Fn crypto_find_driver "const char *match"
-.Ft int
-.Fn crypto_newsession "crypto_session_t *cses" "struct cryptoini *cri" "int crid"
-.Ft int
-.Fn crypto_freesession "crypto_session_t cses"
-.Ft int
-.Fn crypto_dispatch "struct cryptop *crp"
-.Ft int
-.Fn crypto_kdispatch "struct cryptkop *krp"
-.Ft int
-.Fn crypto_unblock "uint32_t driverid" "int what"
-.Ft "struct cryptop *"
-.Fn crypto_getreq "int num"
-.Ft void
-.Fn crypto_freereq "struct cryptop *crp"
-.Bd -literal
-#define CRYPTO_SYMQ 0x1
-#define CRYPTO_ASYMQ 0x2
-
-#define EALG_MAX_BLOCK_LEN 16
-
-struct cryptoini {
- int cri_alg;
- int cri_klen;
- int cri_mlen;
- caddr_t cri_key;
- uint8_t cri_iv[EALG_MAX_BLOCK_LEN];
- struct cryptoini *cri_next;
-};
-
-struct cryptodesc {
- int crd_skip;
- int crd_len;
- int crd_inject;
- int crd_flags;
- struct cryptoini CRD_INI;
-#define crd_iv CRD_INI.cri_iv
-#define crd_key CRD_INI.cri_key
-#define crd_alg CRD_INI.cri_alg
-#define crd_klen CRD_INI.cri_klen
- struct cryptodesc *crd_next;
-};
-
-struct cryptop {
- TAILQ_ENTRY(cryptop) crp_next;
- crypto_session_t crp_session;
- int crp_ilen;
- int crp_olen;
- int crp_etype;
- int crp_flags;
- caddr_t crp_buf;
- caddr_t crp_opaque;
- struct cryptodesc *crp_desc;
- int (*crp_callback) (struct cryptop *);
- caddr_t crp_mac;
-};
-
-struct crparam {
- caddr_t crp_p;
- u_int crp_nbits;
-};
-
-#define CRK_MAXPARAM 8
-
-struct cryptkop {
- TAILQ_ENTRY(cryptkop) krp_next;
- u_int krp_op; /* ie. CRK_MOD_EXP or other */
- u_int krp_status; /* return status */
- u_short krp_iparams; /* # of input parameters */
- u_short krp_oparams; /* # of output parameters */
- uint32_t krp_hid;
- struct crparam krp_param[CRK_MAXPARAM];
- int (*krp_callback)(struct cryptkop *);
-};
-.Ed
.Sh DESCRIPTION
.Nm
-is a framework for drivers of cryptographic hardware to register with
-the kernel so
-.Dq consumers
-(other kernel subsystems, and
-users through the
+is a framework for in-kernel cryptography.
+It permits in-kernel consumers to encrypt and decrypt data
+and also enables userland applications to use cryptographic hardware
+through the
.Pa /dev/crypto
-device) are able to make use of it.
-Drivers register with the framework the algorithms they support,
-and provide entry points (functions) the framework may call to
-establish, use, and tear down sessions.
-Sessions are used to cache cryptographic information in a particular driver
-(or associated hardware), so initialization is not needed with every request.
-Consumers of cryptographic services pass a set of
-descriptors that instruct the framework (and the drivers registered
-with it) of the operations that should be applied on the data (more
-than one cryptographic operation can be requested).
-.Pp
-Keying operations are supported as well.
-Unlike the symmetric operators described above,
-these sessionless commands perform mathematical operations using
-input and output parameters.
+device.
.Pp
+.Nm
+supports two modes of operation:
+one mode for symmetric-keyed cryptographic requests and digests,
+and a second mode for asymmetric-key requests and modular arithmetic.
+.Ss Symmetric-Key Mode
+Symmetric-key operations include encryption and decryption operations
+using block and stream ciphers as well as computation and verification
+of message authentication codes (MACs).
+In this mode,
+consumers allocate sessions to describe a transform as discussed in
+.Xr crypto_session 9 .
+Consumers then allocate request objects to describe each transformation
+such as encrypting a network packet or decrypting a disk sector.
+Requests are described in
+.Xr crypto_request 9 .
+.Pp
+Device drivers are responsible for processing requests submitted by
+consumers.
+.Xr crypto_driver 9
+describes the interfaces drivers use to register with the framework,
+helper routines the framework provides to facilitate request processing,
+and the interfaces drivers are required to provide.
+.Ss Asymmetric-Key Mode
+Asymmetric-key operations do not use sessions.
+Instead,
+these operations perform individual mathematical operations using a set
+of input and output parameters.
+These operations are described in
+.Xr crypto_asym 9 .
+Drivers that support asymmetric operations use additional interfaces
+described in
+.Xr crypto_asym 9
+in addition to the base interfaces described in
+.Xr crypto_driver 9 .
+.Ss Callbacks
Since the consumers may not be associated with a process, drivers may
not
.Xr sleep 9 .
@@ -148,88 +78,38 @@ to notify a consumer that a request has been completed (the
callback is specified by the consumer on a per-request basis).
The callback is invoked by the framework whether the request was
successfully completed or not.
-An error indication is provided in the latter case.
-A specific error code,
+Errors are reported to the callback function.
+.Pp
+Session initialization does not use callbacks and returns errors
+synchronously.
+.Ss Session Migration
+For symmetric-key operations,
+a specific error code,
.Er EAGAIN ,
is used to indicate that a session handle has changed and that the
request may be re-submitted immediately with the new session.
-Errors are only returned to the invoking function if not
-enough information to call the callback is available (meaning, there
-was a fatal error in verifying the arguments).
-For session initialization and teardown no callback mechanism is used.
-.Pp
-The
-.Fn crypto_find_driver
-returns the driver id of the device whose name matches
-.Fa match .
-.Fa match
-can either be the exact name of a device including the unit
-or the driver name without a unit.
-In the latter case,
-the id of the first device with the matching driver name is returned.
-If no matching device is found,
-the value -1 is returned.
-.Pp
-The
-.Fn crypto_newsession
-routine is called by consumers of cryptographic services (such as the
-.Xr ipsec 4
-stack) that wish to establish a new session with the framework.
-The
-.Fa cri
-argument points to a
-.Vt cryptoini
-structure containing all the necessary information for
-the driver to establish the session.
-The
-.Fa crid
-argument is either a specific driver id or a bitmask of flags.
-The flags are
-.Dv CRYPTOCAP_F_HARDWARE ,
-to select hardware devices,
-or
-.Dv CRYPTOCAP_F_SOFTWARE ,
-to select software devices.
-If both are specified, hardware devices are preferred over software
-devices.
-On success, the opaque session handle of the new session will be stored in
-.Fa *cses .
-The
-.Vt cryptoini
-structure pointed to by
-.Fa cri
-contains these fields:
-.Bl -tag -width ".Va cri_next"
-.It Va cri_alg
-An algorithm identifier.
-Currently supported algorithms are:
-.Pp
-.Bl -tag -width ".Dv CRYPTO_RIPEMD160_HMAC" -compact
-.It Dv CRYPTO_AES_128_NIST_GMAC
-.It Dv CRYPTO_AES_192_NIST_GMAC
-.It Dv CRYPTO_AES_256_NIST_GMAC
-.It Dv CRYPTO_AES_CBC
-.It Dv CRYPTO_AES_CCM_16
+The consumer should update its saved copy of the session handle
+to the value of
+.Fa crp_session
+so that future requests use the new session.
+.Ss Supported Algorithms
+More details on some algorithms may be found in
+.Xr crypto 7 .
+These algorithms are used for symmetric-mode operations.
+Asymmetric-mode operations support operations described in
+.Xr crypto_asym 9 .
+.Pp
+The following authentication algorithms are supported:
+.Pp
+.Bl -tag -offset indent -width CRYPTO_AES_CCM_CBC_MAC -compact
.It Dv CRYPTO_AES_CCM_CBC_MAC
-.It Dv CRYPTO_AES_ICM
-.It Dv CRYPTO_AES_NIST_GCM_16
.It Dv CRYPTO_AES_NIST_GMAC
-.It Dv CRYPTO_AES_XTS
-.It Dv CRYPTO_ARC4
.It Dv CRYPTO_BLAKE2B
.It Dv CRYPTO_BLAKE2S
-.It Dv CRYPTO_BLF_CBC
-.It Dv CRYPTO_CAMELLIA_CBC
-.It Dv CRYPTO_CAST_CBC
-.It Dv CRYPTO_CHACHA20
-.It Dv CRYPTO_DEFLATE_COMP
-.It Dv CRYPTO_DES_CBC
-.It Dv CRYPTO_3DES_CBC
.It Dv CRYPTO_MD5
.It Dv CRYPTO_MD5_HMAC
.It Dv CRYPTO_MD5_KPDK
.It Dv CRYPTO_NULL_HMAC
-.It Dv CRYPTO_NULL_CBC
.It Dv CRYPTO_POLY1305
.It Dv CRYPTO_RIPEMD160
.It Dv CRYPTO_RIPEMD160_HMAC
@@ -244,488 +124,38 @@ Currently supported algorithms are:
.It Dv CRYPTO_SHA2_384_HMAC
.It Dv CRYPTO_SHA2_512
.It Dv CRYPTO_SHA2_512_HMAC
-.It Dv CRYPTO_SKIPJACK_CBC
-.El
-.It Va cri_klen
-For variable-size key algorithms, the length of the key in bits.
-.It Va cri_mlen
-If non-zero, truncate the calculated hash to this many bytes.
-.It Va cri_key
-The key to be used.
-.It Va cri_iv
-An explicit initialization vector if it does not prefix
-the data.
-This field is ignored during initialization
-.Pq Nm crypto_newsession .
-If no IV is explicitly passed (see below on details), a random IV is used
-by the device driver processing the request.
-.It Va cri_next
-Pointer to another
-.Vt cryptoini
-structure.
-This is used to establish dual-algorithm sessions, such as combining a
-cipher with a MAC.
.El
.Pp
-The
-.Vt cryptoini
-structure and its contents will not be modified or referenced by the
-framework or any cryptographic drivers.
-The memory associated with
-.Fa cri
-can be released once
-.Fn crypto_newsession
-returns.
+The following encryption algorithms are supported:
.Pp
-.Fn crypto_freesession
-is called with the session handle returned by
-.Fn crypto_newsession
-to free the session.
-.Pp
-.Fn crypto_dispatch
-is called to process a request.
-The various fields in the
-.Vt cryptop
-structure are:
-.Bl -tag -width ".Va crp_callback"
-.It Va crp_session
-The session handle.
-.It Va crp_ilen
-The total length in bytes of the buffer to be processed.
-.It Va crp_olen
-On return, contains the total length of the result.
-For symmetric crypto operations, this will be the same as the input length.
-This will be used if the framework needs to allocate a new
-buffer for the result (or for re-formatting the input).
-.It Va crp_callback
-Callback routine invoked when a request is completed via
-.Fn crypto_done .
-The callback routine should inspect the
-.Va crp_etype
-to determine if the request was successfully completed.
-.It Va crp_etype
-The error type, if any errors were encountered, or zero if
-the request was successfully processed.
-If the
-.Er EAGAIN
-error code is returned, the session handle has changed (and has been recorded
-in the
-.Va crp_session
-field).
-The consumer should record the new session handle and use it in all subsequent
-requests.
-In this case, the request may be re-submitted immediately.
-This mechanism is used by the framework to perform
-session migration (move a session from one driver to another, because
-of availability, performance, or other considerations).
-.Pp
-This field is only valid in the context of the callback routine specified by
-.Va crp_callback .
-Errors are returned to the invoker of
-.Fn crypto_process
-only when enough information is not present to call the callback
-routine (i.e., if the pointer passed is
-.Dv NULL
-or if no callback routine was specified).
-.It Va crp_flags
-A bitmask of flags associated with this request.
-Currently defined flags are:
-.Bl -tag -width ".Dv CRYPTO_F_CBIFSYNC"
-.It Dv CRYPTO_F_IMBUF
-The buffer is an mbuf chain pointed to by
-.Va crp_mbuf .
-.It Dv CRYPTO_F_IOV
-The buffer is a
-.Vt uio
-structure pointed to by
-.Va crp_uio .
-.It Dv CRYPTO_F_BATCH
-Batch operation if possible.
-.It Dv CRYPTO_F_CBIMM
-Do callback immediately instead of doing it from a dedicated kernel thread.
-.It Dv CRYPTO_F_DONE
-Operation completed.
-.It Dv CRYPTO_F_CBIFSYNC
-Do callback immediately if operation is synchronous (that the driver
-specified the
-.Dv CRYPTOCAP_F_SYNC
-flag).
-.It Dv CRYPTO_F_ASYNC
-Try to do the crypto operation in a pool of workers
-if the operation is synchronous (that is, if the driver specified the
-.Dv CRYPTOCAP_F_SYNC
-flag).
-It aims to speed up processing by dispatching crypto operations
-on different processors.
-.It Dv CRYPTO_F_ASYNC_KEEPORDER
-Dispatch callbacks in the same order they are posted.
-Only relevant if the
-.Dv CRYPTO_F_ASYNC
-flag is set and if the operation is synchronous.
-.El
-.It Va crp_buf
-Data buffer unless
-.Dv CRYPTO_F_IMBUF
-or
-.Dv CRYPTO_F_IOV
-is set in
-.Va crp_flags .
-The length in bytes is set in
-.Va crp_ilen .
-.It Va crp_mbuf
-Data buffer mbuf chain when
-.Dv CRYPTO_F_IMBUF
-is set in
-.Va crp_flags .
-.It Va crp_uio
-.Vt struct uio
-data buffer when
-.Dv CRYPTO_F_IOV
-is set in
-.Va crp_flags .
-.It Va crp_opaque
-Cookie passed through the crypto framework untouched.
-It is
-intended for the invoking application's use.
-.It Va crp_desc
-A linked list of descriptors.
-Each descriptor provides
-information about what type of cryptographic operation should be done
-on the input buffer.
-The various fields are:
-.Bl -tag -width ".Va crd_inject"
-.It Va crd_iv
-When the flag
-.Dv CRD_F_IV_EXPLICIT
-is set, this field contains the IV.
-.It Va crd_key
-When the
-.Dv CRD_F_KEY_EXPLICIT
-flag is set, the
-.Va crd_key
-points to a buffer with encryption or authentication key.
-.It Va crd_alg
-An algorithm to use.
-Must be the same as the one given at newsession time.
-.It Va crd_klen
-The
-.Va crd_key
-key length.
-.It Va crd_skip
-The offset in the input buffer where processing should start.
-.It Va crd_len
-How many bytes, after
-.Va crd_skip ,
-should be processed.
-.It Va crd_inject
-The
-.Va crd_inject
-field specifies an offset in bytes from the beginning of the buffer.
-For encryption algorithms, this may be where the IV will be inserted
-when encrypting or where the IV may be found for
-decryption (subject to
-.Va crd_flags ) .
-For MAC algorithms, this is where the result of the keyed hash will be
-inserted.
-.It Va crd_flags
-The following flags are defined:
-.Bl -tag -width 3n
-.It Dv CRD_F_ENCRYPT
-For encryption algorithms, this bit is set when encryption is required
-(when not set, decryption is performed).
-.It Dv CRD_F_IV_PRESENT
-.\" This flag name has nothing to do with its behavior; fix the name.
-For encryption, if this bit is not set the IV used to encrypt the packet
-will be written at the location pointed to by
-.Va crd_inject .
-The IV length is assumed to be equal to the blocksize of the
-encryption algorithm.
-For encryption, if this bit is set, nothing is done.
-For decryption, this flag has no meaning.
-Applications that do special
-.Dq "IV cooking" ,
-such as the half-IV mode in
-.Xr ipsec 4 ,
-can use this flag to indicate that the IV should not be written on the packet.
-This flag is typically used in conjunction with the
-.Dv CRD_F_IV_EXPLICIT
-flag.
-.It Dv CRD_F_IV_EXPLICIT
-This bit is set when the IV is explicitly
-provided by the consumer in the
-.Va crd_iv
-field.
-Otherwise, for encryption operations the IV is provided by
-the driver used to perform the operation, whereas for decryption
-operations the offset of the IV is provided by the
-.Va crd_inject
-field.
-This flag is typically used when the IV is calculated
-.Dq "on the fly"
-by the consumer, and does not precede the data.
-.It Dv CRD_F_KEY_EXPLICIT
-For encryption and authentication (MAC) algorithms, this bit is set when the key
-is explicitly provided by the consumer in the
-.Va crd_key
-field for the given operation.
-Otherwise, the key is taken at newsession time from the
-.Va cri_key
-field.
-As calculating the key schedule may take a while, it is recommended
-that frequently used keys be given their own session.
-.It Dv CRD_F_COMP
-For compression algorithms, this bit is set when compression is required (when
-not set, decompression is performed).
-.El
-.It Va CRD_INI
-This
-.Vt cryptoini
-structure will not be modified by the framework or the device drivers.
-Since this information accompanies every cryptographic
-operation request, drivers may re-initialize state on-demand
-(typically an expensive operation).
-Furthermore, the cryptographic
-framework may re-route requests as a result of full queues or hardware
-failure, as described above.
-.It Va crd_next
-Points to the next descriptor.
-Linked operations are useful in protocols such as
-.Xr ipsec 4 ,
-where multiple cryptographic transforms may be applied on the same
-block of data.
-.El
+.Bl -tag -offset indent -width CRYPTO_CAMELLIA_CBC -compact
+.It Dv CRYPTO_AES_CBC
+.It Dv CRYPTO_AES_ICM
+.It Dv CRYPTO_AES_XTS
+.It Dv CRYPTO_ARC4
+.It Dv CRYPTO_BLF_CBC
+.It Dv CRYPTO_CAMELLIA_CBC
+.It Dv CRYPTO_CAST_CBC
+.It Dv CRYPTO_CHACHA20
+.It Dv CRYPTO_DES_CBC
+.It Dv CRYPTO_3DES_CBC
+.It Dv CRYPTO_NULL_CBC
+.It Dv CRYPTO_SKIPJACK_CBC
.El
.Pp
-.Fn crypto_getreq
-allocates a
-.Vt cryptop
-structure with a linked list of
-.Fa num
-.Vt cryptodesc
-structures.
-.Pp
-.Fn crypto_freereq
-deallocates a structure
-.Vt cryptop
-and any
-.Vt cryptodesc
-structures linked to it.
-Note that it is the responsibility of the
-callback routine to do the necessary cleanups associated with the
-opaque field in the
-.Vt cryptop
-structure.
+The following authenticated encryption with additional data (AEAD)
+algorithms are supported:
.Pp
-.Fn crypto_kdispatch
-is called to perform a keying operation.
-The various fields in the
-.Vt cryptkop
-structure are:
-.Bl -tag -width ".Va krp_callback"
-.It Va krp_op
-Operation code, such as
-.Dv CRK_MOD_EXP .
-.It Va krp_status
-Return code.
-This
-.Va errno Ns -style
-variable indicates the lower-level reason
-for operation failure.
-.It Va krp_iparams
-Number of input parameters to the specified operation.
-Note that each operation has a (typically hardwired) number of such parameters.
-.It Va krp_oparams
-Number of output parameters from the specified operation.
-Note that each operation has a (typically hardwired) number of such parameters.
-.It Va krp_kvp
-An array of kernel memory blocks containing the parameters.
-.It Va krp_hid
-Identifier specifying which low-level driver is being used.
-.It Va krp_callback
-Callback called on completion of a keying operation.
+.Bl -tag -offset indent -width CRYPTO_AES_NIST_GCM_16 -compact
+.It Dv CRYPTO_AES_CCM_16
+.It Dv CRYPTO_AES_NIST_GCM_16
.El
-.Sh DRIVER-SIDE API
-The
-.Fn crypto_get_driverid ,
-.Fn crypto_get_driver_session ,
-.Fn crypto_register ,
-.Fn crypto_kregister ,
-.Fn crypto_unregister ,
-.Fn crypto_unblock ,
-and
-.Fn crypto_done
-routines are used by drivers that provide support for cryptographic
-primitives to register and unregister with the kernel crypto services
-framework.
-.Pp
-Drivers must first use the
-.Fn crypto_get_driverid
-function to acquire a driver identifier, specifying the
-.Fa flags
-as an argument.
-One of
-.Dv CRYPTOCAP_F_SOFTWARE
-or
-.Dv CRYPTOCAP_F_HARDWARE
-must be specified.
-The
-.Dv CRYPTOCAP_F_SYNC
-flag may also be specified, and should be if the driver performs all of
-its operations synchronously.
-Drivers must pass the size of their session structure as the second argument.
-An appropriately sized block of memory will be allocated by the framework, zeroed, and
-passed to the driver's
-.Fn newsession
-method.
-.Pp
-For each algorithm the driver supports, it must then call
-.Fn crypto_register .
-The first two arguments are the driver and algorithm identifiers.
-The next two arguments specify the largest possible operator length (in bits,
-important for public key operations) and flags for this algorithm.
-.Pp
-.Fn crypto_unregister
-is called by drivers that wish to withdraw support for an algorithm.
-The two arguments are the driver and algorithm identifiers, respectively.
-Typically, drivers for
-PCMCIA
-crypto cards that are being ejected will invoke this routine for all
-algorithms supported by the card.
-.Fn crypto_unregister_all
-will unregister all algorithms registered by a driver
-and the driver will be disabled (no new sessions will be allocated on
-that driver, and any existing sessions will be migrated to other
-drivers).
-The same will be done if all algorithms associated with a driver are
-unregistered one by one.
-After a call to
-.Fn crypto_unregister_all
-there will be no threads in either the newsession or freesession function
-of the driver.
.Pp
-The calling conventions for the driver-supplied routines are:
+The following compression algorithms are supported:
.Pp
-.Bl -item -compact
-.It
-.Ft int
-.Fn \*[lp]*newsession\*[rp] "device_t" "crypto_session_t" "struct cryptoini *" ;
-.It
-.Ft void
-.Fn \*[lp]*freesession\*[rp] "device_t" "crypto_session_t" ;
-.It
-.Ft int
-.Fn \*[lp]*process\*[rp] "device_t" "struct cryptop *" "int" ;
-.It
-.Ft int
-.Fn \*[lp]*kprocess\*[rp] "device_t" "struct cryptkop *" "int" ;
+.Bl -tag -offset indent -width CRYPTO_DEFLATE_COMP -compact
+.It Dv CRYPTO_DEFLATE_COMP
.El
-.Pp
-On invocation, the first argument to
-all routines is the
-.Fa device_t
-that was provided to
-.Fn crypto_get_driverid .
-The second argument to
-.Fn newsession
-is the opaque session handle for the new session.
-The third argument is identical to that of
-.Fn crypto_newsession .
-.Pp
-Drivers obtain a pointer to their session memory by invoking
-.Fn crypto_get_driver_session
-on the opaque
-.Vt crypto_session_t
-handle.
-.Pp
-The
-.Fn freesession
-routine takes as arguments the opaque data value and the session handle.
-It should clear any context associated with the session (clear hardware
-registers, memory, etc.).
-If no resources need to be released other than the contents of session memory,
-the method is optional.
-The
-.Nm
-framework will zero and release the allocated session memory (after running the
-.Fn freesession
-method, if one exists).
-.Pp
-The
-.Fn process
-routine is invoked with a request to perform crypto processing.
-This routine must not block or sleep, but should queue the request and return
-immediately or process the request to completion.
-In case of an unrecoverable error, the error indication must be placed in the
-.Va crp_etype
-field of the
-.Vt cryptop
-structure.
-When the request is completed, or an error is detected, the
-.Fn process
-routine must invoke
-.Fn crypto_done .
-Session migration may be performed, as mentioned previously.
-.Pp
-In case of a temporary resource exhaustion, the
-.Fn process
-routine may return
-.Er ERESTART
-in which case the crypto services will requeue the request, mark the driver
-as
-.Dq blocked ,
-and stop submitting requests for processing.
-The driver is then responsible for notifying the crypto services
-when it is again able to process requests through the
-.Fn crypto_unblock
-routine.
-This simple flow control mechanism should only be used for short-lived
-resource exhaustion as it causes operations to be queued in the crypto
-layer.
-Queueing is preferable to returning an error in such cases, as an error
-can cause network protocols to degrade performance by treating the
-failure much like a lost packet.
-.Pp
-The
-.Fn kprocess
-routine is invoked with a request to perform crypto key processing.
-This routine must not block, but should queue the request and return
-immediately.
-Upon processing the request, the callback routine should be invoked.
-In case of an unrecoverable error, the error indication must be placed in the
-.Va krp_status
-field of the
-.Vt cryptkop
-structure.
-When the request is completed, or an error is detected, the
-.Fn kprocess
-routine should invoke
-.Fn crypto_kdone .
-.Sh RETURN VALUES
-.Fn crypto_register ,
-.Fn crypto_kregister ,
-.Fn crypto_unregister ,
-.Fn crypto_newsession ,
-.Fn crypto_freesession ,
-and
-.Fn crypto_unblock
-return 0 on success, or an error code on failure.
-.Fn crypto_get_driverid
-returns a non-negative value on success, and \-1 on failure.
-.Fn crypto_getreq
-returns a pointer to a
-.Vt cryptop
-structure on success, or
-.Dv NULL
-on failure.
-.Fn crypto_dispatch
-returns
-.Er EINVAL
-if its argument or the callback function was
-.Dv NULL ,
-and 0 otherwise.
-The callback is provided with an error code in case of failure, in the
-.Va crp_etype
-field.
.Sh FILES
.Bl -tag -width ".Pa sys/opencrypto/crypto.c"
.It Pa sys/opencrypto/crypto.c
@@ -735,7 +165,10 @@ most of the framework code
.Xr crypto 4 ,
.Xr ipsec 4 ,
.Xr crypto 7 ,
-.Xr malloc 9 ,
+.Xr crypto_asym 9 ,
+.Xr crypto_driver 9 ,
+.Xr crypto_request 9 ,
+.Xr crypto_session 9 ,
.Xr sleep 9
.Sh HISTORY
The cryptographic framework first appeared in
@@ -743,14 +176,6 @@ The cryptographic framework first appeared in
and was written by
.An Angelos D. Keromytis Aq Mt angelos@openbsd.org .
.Sh BUGS
-The framework currently assumes that all the algorithms in a
-.Fn crypto_newsession
-operation must be available by the same driver.
-If that is not the case, session initialization will fail.
-.Pp
-The framework also needs a mechanism for determining which driver is
+The framework needs a mechanism for determining which driver is
best for a specific set of algorithms associated with a session.
Some type of benchmarking is in order here.
-.Pp
-Multiple instances of the same algorithm in the same session are not
-supported.
diff --git a/share/man/man9/crypto_asym.9 b/share/man/man9/crypto_asym.9
new file mode 100644
index 000000000000..c21a72f8d1c4
--- /dev/null
+++ b/share/man/man9/crypto_asym.9
@@ -0,0 +1,178 @@
+.\" Copyright (c) 2020, Chelsio Inc
+.\"
+.\" Redistribution and use in source and binary forms, with or without
+.\" modification, are permitted provided that the following conditions are met:
+.\"
+.\" 1. Redistributions of source code must retain the above copyright notice,
+.\" this list of conditions and the following disclaimer.
+.\"
+.\" 2. Redistributions in binary form must reproduce the above copyright
+.\" notice, this list of conditions and the following disclaimer in the
+.\" documentation and/or other materials provided with the distribution.
+.\"
+.\" 3. Neither the name of the Chelsio Inc nor the names of its
+.\" contributors may be used to endorse or promote products derived from
+.\" this software without specific prior written permission.
+.\"
+.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+.\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+.\" ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+.\" LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+.\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+.\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+.\" POSSIBILITY OF SUCH DAMAGE.
+.\"
+.\" * Other names and brands may be claimed as the property of others.
+.\"
+.\" $FreeBSD$
+.\"
+.Dd March 27, 2020
+.Dt CRYPTO_ASYM 9
+.Os
+.Sh NAME
+.Nm crypto_asym
+.Nd asymmetric cryptographic operations
+.Sh SYNOPSIS
+.In opencrypto/cryptodev.h
+.Ft int
+.Fn crypto_kdispatch "struct cryptkop *krp"
+.Ft void
+.Fn crypto_kdone "struct cryptkop *krp"
+.Ft int
+.Fn crypto_kregister "uint32_t driverid" "int kalg" "uint32_t flags"
+.Ft int
+.Fn CRYPTODEV_KPROCESS "device_t dev" "struct cryptkop *krp" "int flags"
+.Sh DESCRIPTION
+The in-kernel cryptographic kernel framework supports asymmetric
+requests (keying requests) in addition to symmetric operations.
+There are currently no in-kernel users of these requests,
+but applications can make requests of hardware drivers via the
+.Pa /dev/crypto
+device.
+.Pp
+Some APIs are shared with the framework's symmetric request support.
+This manual describes the APIs and data structures unique to
+asymmetric requests.
+.Ss Request Objects
+A request is described by a
+.Vt struct cryptkop
+containing the following fields:
+.Bl -tag -width "krp_callback"
+.It Fa krp_op
+Operation to perform.
+Available operations include
+.Dv CRK_MOD_EXP ,
+.Dv CRK_MOD_EXP_CRT ,
+.Dv CRK_DSA_SIGN ,
+.Dv CRK_DSA_VERIFY ,
+and
+.Dv CRK_DH_COMPUTE_KEY .
+.It Fa krp_status
+Error status.
+Either zero on success,
+or an error if an operation fails.
+Set by drivers prior to completing a request via
+.Fn crypto_kdone .
+.It Fa krp_iparams
+Count of input parameters.
+.It Fa krp_oparams
+Count of output parameters.
+.It Fa krp_crid
+Requested device.
+.It Fa krp_hid
+Device used to complete the request.
+.It Fa krp_param
+Array of parameters.
+The array contains the input parameters first followed by the output
+parameters.
+Each parameter is stored as a bignum.
+Each bignum is described by a
+.Vt struct crparam
+containing the following fields:
+.Bl -tag -width "crp_nbits"
+.It Fa crp_p
+Pointer to array of packed bytes.
+.It Fa crp_nbits
+Size of bignum in bits.
+.El
+.It Fa krp_callback
+Callback function.
+This must point to a callback function of type
+.Vt void (*)(struct cryptkop *) .
+The callback function should inspect
+.Fa krp_status
+to determine the status of the completed operation.
+.El
+.Pp
+New requests should be initialized to zero before setting fields to
+appropriate values.
+Once the request has been populated,
+it should be passed to
+.Fn crypto_kdispatch .
+.Pp
+.Fn crypto_kdispatch
+will choose a device driver to perform the operation described by
+.Fa krp
+and invoke that driver's
+.Fn CRYPTODEV_KPROCESS
+method.
+.Ss Driver API
+Drivers register support for asymmetric operations by calling
+.Fn crypto_kregister
+for each supported algorithm.
+.Fa driverid
+should be the value returned by an earlier call to
+.Fn crypto_get_driverid .
+.Fa kalg
+should be one of the operations that can be set in
+.Fa krp_op .
+.Fa flags
+is a bitmask of zero or more of the following values:
+.Bl -tag -width "CRYPTO_ALG_FLAG_RNG_ENABLE"
+.It Dv CRYPTO_ALG_FLAG_RNG_ENABLE
+Device has a hardware RNG for DH/DSA.
+.It Dv CRYPTO_ALG_FLAG_DSA_SHA
+Device can compute a SHA digest of a message.
+.El
+.Pp
+Drivers unregister with the framework via
+.Fn crypto_unregister_all .
+.Pp
+Similar to
+.Fn CRYPTODEV_PROCESS ,
+.Fn CRYPTODEV_KPROCESS
+should complete the request or schedule it for asynchronous
+completion.
+If this method is not able to complete a request due to insufficient
+resources,
+it can defer the request (and future asymmetric requests) by returning
+.Dv ERESTART .
+Once resources are available,
+the driver should invoke
+.Fn crypto_unblock
+with
+.Dv CRYPTO_ASYMQ
+to resume processing of asymmetric requests.
+.Pp
+Once a request is completed,
+the driver should set
+.Fa krp_status
+and then call
+.Fn crypto_kdone .
+.Sh RETURN VALUES
+.Fn crypto_kdispatch ,
+.Fn crypto_kregister ,
+and
+.Fn CRYPTODEV_KPROCESS
+return zero on success or an error on failure.
+.Sh SEE ALSO
+.Xr crypto 7 ,
+.Xr crypto 9 ,
+.Xr crypto_driver 9 ,
+.Xr crypto_request 9 ,
+.Xr crypto_session 9
diff --git a/share/man/man9/crypto_driver.9 b/share/man/man9/crypto_driver.9
new file mode 100644
index 000000000000..99260062020f
--- /dev/null
+++ b/share/man/man9/crypto_driver.9
@@ -0,0 +1,392 @@
+.\" Copyright (c) 2020, Chelsio Inc
+.\"
+.\" Redistribution and use in source and binary forms, with or without
+.\" modification, are permitted provided that the following conditions are met:
+.\"
+.\" 1. Redistributions of source code must retain the above copyright notice,
+.\" this list of conditions and the following disclaimer.
+.\"
+.\" 2. Redistributions in binary form must reproduce the above copyright
+.\" notice, this list of conditions and the following disclaimer in the
+.\" documentation and/or other materials provided with the distribution.
+.\"
+.\" 3. Neither the name of the Chelsio Inc nor the names of its
+.\" contributors may be used to endorse or promote products derived from
+.\" this software without specific prior written permission.
+.\"
+.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+.\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+.\" ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+.\" LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+.\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+.\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+.\" POSSIBILITY OF SUCH DAMAGE.
+.\"
+.\" * Other names and brands may be claimed as the property of others.
+.\"
+.\" $FreeBSD$
+.\"
+.Dd March 27, 2020
+.Dt CRYPTO_DRIVER 9
+.Os
+.Sh NAME
+.Nm crypto_driver
+.Nd interface for symmetric cryptographic drivers
+.Sh SYNOPSIS
+.In opencrypto/cryptodev.h
+.Ft int
+.Fo crypto_apply
+.Fa "struct cryptop *crp"
+.Fa "int off"
+.Fa "int len"
+.Fa "int (*f)(void *, void *, u_int)"
+.Fa "void *arg"
+.Fc
+.Ft void *
+.Fo crypto_contiguous_subsegment
+.Fa "struct cryptop *crp"
+.Fa "size_t skip"
+.Fa "size_t len"
+.Fc
+.Ft void
+.Fn crypto_copyback "struct cryptop *crp" "int off" "int size" "const void *src"
+.Ft void
+.Fn crypto_copydata "struct cryptop *crp" "int off" "int size" "void *dst"
+.Ft void
+.Fn crypto_done "struct cryptop *crp"
+.Ft int32_t
+.Fn crypto_get_driverid "device_t dev" "size_t session_size" "int flags"
+.Ft void *
+.Fn crypto_get_driver_session "crypto_session_t crypto_session"
+.Ft int
+.Fn crypto_unblock "uint32_t driverid" "int what"
+.Ft int
+.Fn crypto_unregister_all "uint32_t driverid"
+.Ft int
+.Fn CRYPTODEV_FREESESSION "device_t dev" "crypto_session_t crypto_session"
+.Ft int
+.Fo CRYPTODEV_NEWSESSION
+.Fa "device_t dev"
+.Fa "crypto_session_t crypto_session"
+.Fa "const struct crypto_session_params *csp"
+.Fc
+.Ft int
+.Fo CRYPTODEV_PROBESESSION
+.Fa "device_t dev"
+.Fa "const struct crypto_session_params *csp"
+.Fc
+.Ft int
+.Fn CRYPTODEV_PROCESS "device_t dev" "struct cryptop *crp" "int flags"
+.Ft void
+.Fo hmac_init_ipad
+.Fa "struct auth_hash *axf"
+.Fa "const char *key"
+.Fa "int klen"
+.Fa "void *auth_ctx"
+.Fc
+.Ft void
+.Fo hmac_init_opad
+.Fa "struct auth_hash *axf"
+.Fa "const char *key"
+.Fa "int klen"
+.Fa "void *auth_ctx"
+.Fc
+.Sh DESCRIPTION
+Symmetric cryptographic drivers process cryptographic requests
+submitted to sessions associated with the driver.
+.Pp
+Cryptographic drivers call
+.Fn crypto_get_driverid
+to register with the cryptographic framework.
+.Fa dev
+is the device used to service requests.
+The
+.Fn CRYPTODEV
+methods are defined in the method table for the device driver attached to
+.Fa dev .
+.Fa session_size
+specifies the size of a driver-specific per-session structure allocated by
+the cryptographic framework.
+.Fa flags
+is a bitmask of properties about the driver.
+Exactly one of
+.Dv CRYPTOCAP_F_SOFTWARE
+or
+.Dv CRYPTOCAP_F_HARDWARE
+must be specified.
+.Dv CRYPTOCAP_F_SOFTWARE
+should be used for drivers which process requests using host CPUs.
+.Dv CRYPTOCAP_F_HARDWARE
+should be used for drivers which process requests on separate co-processors.
+.Dv CRYPTOCAP_F_SYNC
+should be set for drivers which process requests synchronously in
+.Fn CRYPTODEV_PROCESS .
+.Fn crypto_get_driverid
+returns an opaque driver id.
+.Pp
+.Fn crypto_unregister_all
+unregisters a driver from the cryptographic framework.
+If there are any pending operations or open sessions,
+this function will sleep.
+.Fa driverid
+is the value returned by an earlier call to
+.Fn crypto_get_driverid .
+.Pp
+When a new session is created by
+.Fn crypto_newsession ,
+.Fn CRYPTODEV_PROBESESSION
+is invoked by the cryptographic framework on each active driver to
+determine the best driver to use for the session.
+This method should inspect the session parameters in
+.Fa csp .
+If a driver does not support requests described by
+.Fa csp ,
+this method should return an error value.
+If the driver does support requests described by
+.Fa csp ,
+it should return a negative value.
+The framework prefers drivers with the largest negative value,
+similar to
+.Xr DEVICE_PROBE 9 .
+The following non-error return values are defined for this
+method:
+.Bl -tag -width "CRYPTODEV_PROBE_ACCEL_SOFTWARE"
+.It Dv CRYPTODEV_PROBE_HARDWARE
+The driver processes requests via a co-processor.
+.It Dv CRYPTODEV_PROBE_ACCEL_SOFTWARE
+The driver processes requests on the host CPU using optimized instructions
+such as AES-NI.
+.It Dv CRYPTODEV_PROBE_SOFTWARE
+The driver processes requests on the host CPU.
+.El
+.Pp
+This method should not sleep.
+.Pp
+Once the framework has chosen a driver for a session,
+the framework invokes the
+.Fn CRYPTODEV_NEWSESSION
+method to initialize driver-specific session state.
+Prior to calling this method,
+the framework allocates a per-session driver-specific data structure.
+This structure is initialized with zeroes,
+and its size is set by the
+.Fa session_size
+passed to
+.Fn crypto_get_driverid .
+This method can retrieve a pointer to this data structure by passing
+.Fa crypto_session
+to
+.Fn crypto_get_driver_session .
+Session parameters are described in
+.Fa csp .
+.Pp
+This method should not sleep.
+.Pp
+.Fn CRYPTODEV_FREESESSION
+is invoked to release any driver-specific state when a session is
+destroyed.
+The per-session driver-specific data structure is explicitly zeroed
+and freed by the framework after this method returns.
+If a driver requires no additional tear-down steps, it can leave
+this method undefined.
+.Pp
+This method should not sleep.
+.Pp
+.Fn CRYPTODEV_PROCESS
+is invoked for each request submitted to an active session.
+This method can either complete a request synchronously or
+schedule it to be completed asynchronously,
+but it must not sleep.
+.Pp
+If this method is not able to complete a request due to insufficient
+resources such as a full command queue,
+it can defer the request by returning
+.Dv ERESTART .
+The request will be queued by the framework and retried once the
+driver releases pending requests via
+.Fn crypto_unblock .
+Any requests submitted to sessions belonging to the driver will also
+be queued until
+.Fn crypto_unblock
+is called.
+.Pp
+If a driver encounters errors while processing a request,
+it should report them via the
+.Fa crp_etype
+field of
+.Fa crp
+rather than returning an error directly.
+.Pp
+.Fa flags
+may be set to
+.Dv CRYPTO_HINT_MORE
+if there are additional requests queued for this driver.
+The driver can use this as a hint to batch completion interrupts.
+Note that these additional requests may be from different sessions.
+.Pp
+.Fn crypto_get_driver_session
+returns a pointer to the driver-specific per-session data structure
+for the session
+.Fa crypto_session .
+This function can be used in the
+.Fn CRYPTODEV_NEWSESSION ,
+.Fn CRYPTODEV_PROCESS ,
+and
+.Fn CRYPTODEV_FREESESSION
+callbacks.
+.Pp
+.Fn crypto_copydata
+copies
+.Fa size
+bytes out of the data buffer for
+.Fa crp
+into a local buffer pointed to by
+.Fa dst .
+The bytes are read starting at an offset of
+.Fa off
+bytes in the request's data buffer.
+.Pp
+.Fn crypto_copyback
+copies
+.Fa size
+bytes from the local buffer pointed to by
+.Fa src
+into the data buffer for
+.Fa crp .
+The bytes are written starting at an offset of
+.Fa off
+bytes in the request's data buffer.
+.Pp
+A driver calls
+.Fn crypto_done
+to mark the request
+.Fa crp
+as completed.
+Any errors should be set in
+.Fa crp_etype
+prior to calling this function.
+.Pp
+If a driver defers a request by returning
+.Dv ERESTART
+from
+.Fn CRYPTODEV_PROCESS ,
+the framework will queue all requests for the driver until the driver calls
+.Fn crypto_unblock
+to indicate that the temporary resource shortage has been relieved.
+For example,
+if a driver returns
+.Dv ERESTART
+due to a full command ring,
+it would invoke
+.Fn crypto_unblock
+from a command completion interrupt that makes a command ring entry available.
+.Fa driverid
+is the value returned by
+.Fn crypto_get_driverid .
+.Fa what
+indicates which types of requests the driver is able to handle again:
+.Bl -tag -width "CRYPTO_ASYMQ"
+.It Dv CRYPTO_SYMQ
+indicates that the driver is able to handle symmetric requests passed to
+.Fn CRYPTODEV_PROCESS .
+.It Dv CRYPTO_ASYMQ
+indicates that the driver is able to handle asymmetric requests passed to
+.Fn CRYPTODEV_KPROCESS .
+.El
+.Pp
+.Fn crypto_apply
+is a helper routine that applies a caller-supplied function
+to a region of the data buffer for
+.Fa crp .
+The function
+.Fa f
+is called one or more times.
+For each invocation,
+the first argument to
+.Fa f
+is the value of
+.Fa arg
+passed to
+.Fn crypto_apply .
+The second and third arguments to
+.Fa f
+are a pointer and length to a segment of the buffer mapped into the kernel.
+The function is called enough times to cover the
+.Fa len
+bytes of the data buffer which starts at an offset
+.Fa off .
+If any invocation of
+.Fa f
+returns a non-zero value,
+.Fn crypto_apply
+immediately returns that value without invoking
+.Fa f
+on any remaining segments of the region,
+otherwise
+.Fn crypto_apply
+returns the value from the final call to
+.Fa f .
+.Pp
+.Fn crypto_contiguous_subsegment
+attempts to locate a single, virtually-contiguous segment of the data buffer
+for
+.Fa crp .
+The segment must be
+.Fa len
+bytes long and start at an offset of
+.Fa skip
+bytes.
+If a segment is found,
+a pointer to the start of the segment is returned.
+Otherwise,
+.Dv NULL
+is returned.
+.Pp
+.Fn hmac_init_ipad
+prepares an authentication context to generate the inner hash of an HMAC.
+.Fa axf
+is a software implementation of an authentication algorithm such as the
+value returned by
+.Fn crypto_auth_hash .
+.Fa key
+is a pointer to an HMAC key of
+.Fa klen
+bytes.
+.Fa auth_ctx
+points to a valid authentication context for the desired algorithm.
+The function initializes the context with the supplied key.
+.Pp
+.Fn hmac_init_opad
+is similar to
+.Fn hmac_init_ipad
+except that it prepares an authentication context to generate the
+outer hash of an HMAC.
+.Sh RETURN VALUES
+.Fn crypto_apply
+returns the return value from the caller-supplied callback function.
+.Pp
+.Fn crypto_contiguous_subsegment
+returns a pointer to a contiguous segment or
+.Dv NULL .
+.Pp
+.Fn crypto_get_driverid
+returns a driver identifier on success or -1 on error.
+.Pp
+.Fn crypto_unblock ,
+.Fn crypto_unregister_all ,
+.Fn CRYPTODEV_FREESESSION ,
+.Fn CRYPTODEV_NEWSESSION ,
+and
+.Fn CRYPTODEV_PROCESS
+return zero on success or an error on failure.
+.Pp
+.Fn CRYPTODEV_PROBESESSION
+returns a negative value on success or an error on failure.
+.Sh SEE ALSO
+.Xr crypto 7 ,
+.Xr crypto 9 ,
+.Xr crypto_request 9 ,
+.Xr crypto_session 9
diff --git a/share/man/man9/crypto_request.9 b/share/man/man9/crypto_request.9
new file mode 100644
index 000000000000..4e6dfddfda3f
--- /dev/null
+++ b/share/man/man9/crypto_request.9
@@ -0,0 +1,419 @@
+.\" Copyright (c) 2020, Chelsio Inc
+.\"
+.\" Redistribution and use in source and binary forms, with or without
+.\" modification, are permitted provided that the following conditions are met:
+.\"
+.\" 1. Redistributions of source code must retain the above copyright notice,
+.\" this list of conditions and the following disclaimer.
+.\"
+.\" 2. Redistributions in binary form must reproduce the above copyright
+.\" notice, this list of conditions and the following disclaimer in the
+.\" documentation and/or other materials provided with the distribution.
+.\"
+.\" 3. Neither the name of the Chelsio Inc nor the names of its
+.\" contributors may be used to endorse or promote products derived from
+.\" this software without specific prior written permission.
+.\"
+.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+.\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+.\" ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+.\" LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+.\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+.\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+.\" POSSIBILITY OF SUCH DAMAGE.
+.\"
+.\" * Other names and brands may be claimed as the property of others.
+.\"
+.\" $FreeBSD$
+.\"
+.Dd March 27, 2020
+.Dt CRYPTO_REQUEST 9
+.Os
+.Sh NAME
+.Nm crypto_request
+.Nd symmetric cryptographic operations
+.Sh SYNOPSIS
+.In opencrypto/cryptodev.h
+.Ft int
+.Fn crypto_dispatch "struct cryptop *crp"
+.Ft void
+.Fn crypto_freereq "struct cryptop *crp"
+.Ft "struct cryptop *"
+.Fn crypto_getreq "crypto_session_t cses" "int how"
+.Sh DESCRIPTION
+Each symmetric cryptographic operation in the kernel is described by
+an instance of
+.Vt struct cryptop
+and is associated with an active session.
+.Pp
+New requests are allocated by
+.Fn crypto_getreq .
+.Fa cses
+is a reference to an active session.
+.Fa how
+is passed to
+.Xr malloc 9
+and should be set to either
+.Dv M_NOWAIT
+or
+.Dv M_WAITOK .
+The caller should then set fields in the returned structure to describe
+request-specific parameters.
+Unused fields should be left as-is.
+.Pp
+.Fn crypto_dispatch
+passes a crypto request to the driver attached to the request's session.
+If there are errors in the request's fields, this function may return
+an error to the caller.
+If errors are encountered while servicing the request, they will instead
+be reported to the request's callback function
+.Pq Fa crp_callback
+via
+.Fa crp_etype .
+.Pp
+Note that a request's callback function may be invoked before
+.Fn crypto_dispatch
+returns.
+.Pp
+Once a request has signaled completion by invoking its callback function,
+it should be freed via
+.Fn crypto_freereq .
+.Pp
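The allocate/dispatch/free lifecycle above can be sketched as follows. This is a minimal kernel-context sketch, not a complete consumer: the session `cses` is assumed to exist already (see crypto_session(9)), the buffer names are hypothetical, and error handling is abbreviated.

```c
/* Callback invoked by the framework when the request completes. */
static void
example_done(struct cryptop *crp)
{
	if (crp->crp_etype != 0)
		printf("crypto request failed: %d\n", crp->crp_etype);
	crypto_freereq(crp);	/* free only after completion */
}

static int
example_encrypt(crypto_session_t cses, char *buf, size_t len)
{
	struct cryptop *crp;

	crp = crypto_getreq(cses, M_WAITOK);
	crp->crp_op = CRYPTO_OP_ENCRYPT;
	crp->crp_buf_type = CRYPTO_BUF_CONTIG;
	crp->crp_buf = buf;
	crp->crp_ilen = len;
	crp->crp_payload_start = 0;
	crp->crp_payload_length = len;
	crp->crp_callback = example_done;
	return (crypto_dispatch(crp));
}
```

Note that `example_done` may run before `crypto_dispatch` returns, so the caller must not touch the request after dispatching it.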
+Cryptographic operations include several fields to describe the request.
+.Ss Buffer Types
+Requests are associated with a single data buffer that is modified in place.
+The type of the data buffer and the buffer itself are described by the
+following fields:
+.Bl -tag -width crp_buf_type
+.It Fa crp_buf_type
+The type of the data buffer.
+The following types are supported:
+.Bl -tag -width CRYPTO_BUF_CONTIG
+.It Dv CRYPTO_BUF_CONTIG
+An array of bytes mapped into the kernel's address space.
+.It Dv CRYPTO_BUF_UIO
+A scatter/gather list of kernel buffers as described in
+.Xr uio 9 .
+.It Dv CRYPTO_BUF_MBUF
+A network memory buffer as described in
+.Xr mbuf 9 .
+.El
+.It Fa crp_buf
+A pointer to the start of a
+.Dv CRYPTO_BUF_CONTIG
+data buffer.
+.It Fa crp_ilen
+The length of a
+.Dv CRYPTO_BUF_CONTIG
+data buffer.
+.It Fa crp_mbuf
+A pointer to a
+.Vt struct mbuf
+for
+.Dv CRYPTO_BUF_MBUF .
+.It Fa crp_uio
+A pointer to a
+.Vt struct uio
+for
+.Dv CRYPTO_BUF_UIO .
+.It Fa crp_olen
+Used with compression and decompression requests to describe the updated
+length of the payload region in the data buffer.
+.Pp
+If a compression request increases the size of the payload,
+then the data buffer is unmodified, the request completes successfully,
+and
+.Fa crp_olen
+is set to the size the compressed data would have used.
+Callers can compare this to the payload region length to determine if
+the compressed data was discarded.
+.El
+.Ss Request Regions
+Each request describes one or more regions in the data buffer.
+Each region is described by an offset relative to the start of the
+data buffer and a length.
+The length of some regions is the same for all requests belonging to
+a session.
+Those lengths are set in the session parameters of the associated
+session.
+All requests must define a payload region.
+Other regions are only required for specific session modes.
+The following regions are defined:
+.Bl -column "Payload" "crp_payload_start" "crp_payload_length"
+.It Sy Region Ta Sy Start Ta Sy Length Ta Sy Description
+.It AAD Ta Fa crp_aad_start Ta Fa crp_aad_length Ta
+Additional Authenticated Data
+.It IV Ta Fa crp_iv_start Ta Fa csp_ivlen Ta
+Embedded IV or nonce
+.It Payload Ta Fa crp_payload_start Ta Fa crp_payload_length Ta
+Data to encrypt, decrypt, compress, or decompress
+.It Digest Ta Fa crp_digest_start Ta Fa csp_auth_mlen Ta
+Authentication digest, hash, or tag
+.El
+.Pp
+Requests are permitted to operate on only a subset of the data buffer.
+For example,
+requests from IPsec operate on network packets that include headers not
+used as either additional authentication data (AAD) or payload data.
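As a sketch of how the region fields compose, an encrypt-then-authenticate request over a buffer laid out as headers (used as AAD), payload, and a trailing digest might be described as below. The offsets `hdr_len` and `payload_len` are assumptions for illustration, not fields of the API.

```c
/*
 * Hypothetical ETA layout: [ AAD | payload | digest ].
 * The digest length comes from csp_auth_mlen in the session
 * parameters, so only its starting offset is set here.
 */
crp->crp_aad_start = 0;
crp->crp_aad_length = hdr_len;
crp->crp_payload_start = hdr_len;
crp->crp_payload_length = payload_len;
crp->crp_digest_start = hdr_len + payload_len;
```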
+.Ss Request Operations
+All requests must specify the type of operation to perform in
+.Fa crp_op .
+Available operations depend on the session's mode.
+.Pp
+Compression requests support the following operations:
+.Bl -tag -width CRYPTO_OP_DECOMPRESS
+.It Dv CRYPTO_OP_COMPRESS
+Compress the data in the payload region of the data buffer.
+.It Dv CRYPTO_OP_DECOMPRESS
+Decompress the data in the payload region of the data buffer.
+.El
+.Pp
+Cipher requests support the following operations:
+.Bl -tag -width CRYPTO_OP_DECRYPT
+.It Dv CRYPTO_OP_ENCRYPT
+Encrypt the data in the payload region of the data buffer.
+.It Dv CRYPTO_OP_DECRYPT
+Decrypt the data in the payload region of the data buffer.
+.El
+.Pp
+Digest requests support the following operations:
+.Bl -tag -width CRYPTO_OP_COMPUTE_DIGEST
+.It Dv CRYPTO_OP_COMPUTE_DIGEST
+Calculate a digest over the payload region of the data buffer
+and store the result in the digest region.
+.It Dv CRYPTO_OP_VERIFY_DIGEST
+Calculate a digest over the payload region of the data buffer.
+Compare the calculated digest to the existing digest from the digest region.
+If the digests match,
+complete the request successfully.
+If the digests do not match,
+fail the request with
+.Er EBADMSG .
+.El
+.Pp
+AEAD and Encrypt-then-Authenticate requests support the following
+operations:
+.Bl -tag -width CRYPTO_OP
+.It Dv CRYPTO_OP_ENCRYPT | Dv CRYPTO_OP_COMPUTE_DIGEST
+Encrypt the data in the payload region of the data buffer.
+Calculate a digest over the AAD and payload regions and store the
+result in the data buffer.
+.It Dv CRYPTO_OP_DECRYPT | Dv CRYPTO_OP_VERIFY_DIGEST
+Calculate a digest over the AAD and payload regions of the data buffer.
+Compare the calculated digest to the existing digest from the digest region.
+If the digests match,
+decrypt the payload region.
+If the digests do not match,
+fail the request with
+.Er EBADMSG .
+.El
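As the operation names above suggest, AEAD and ETA requests combine the two flags with a bitwise OR, for example:

```c
/* Encrypt the payload and compute the authentication tag. */
crp->crp_op = CRYPTO_OP_ENCRYPT | CRYPTO_OP_COMPUTE_DIGEST;
```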
+.Ss Request IV and/or Nonce
+Some cryptographic operations require an IV or nonce as an input.
+An IV may be stored either in the IV region of the data buffer or in
+.Fa crp_iv .
+By default,
+the IV is assumed to be stored in the IV region.
+If the IV is stored in
+.Fa crp_iv ,
+.Dv CRYPTO_F_IV_SEPARATE
+should be set in
+.Fa crp_flags
+and
+.Fa crp_iv_start
+should be left as zero.
+.Pp
+An encryption request using an IV stored in the IV region may set
+.Dv CRYPTO_F_IV_GENERATE
+in
+.Fa crp_flags
+to request that the driver generate a random IV.
+Note that
+.Dv CRYPTO_F_IV_GENERATE
+cannot be used with decryption operations or in combination with
+.Dv CRYPTO_F_IV_SEPARATE .
+.Pp
+Requests that store part, but not all, of the IV in the data buffer should
+store the partial IV in the data buffer and pass the full IV separately in
+.Fa crp_iv .
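A sketch of supplying a separate IV follows; the `iv` buffer and `ivlen` are assumptions, and the length must match the `csp_ivlen` of the associated session.

```c
/* Pass the IV in the request itself rather than in the data buffer. */
crp->crp_flags |= CRYPTO_F_IV_SEPARATE;
memcpy(crp->crp_iv, iv, ivlen);	/* ivlen == csp_ivlen of the session */
```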
+.Ss Request and Callback Scheduling
+The crypto framework provides multiple methods of scheduling the dispatch
+of requests to drivers along with the processing of driver callbacks.
+Requests use flags in
+.Fa crp_flags
+to select the desired scheduling methods.
+.Pp
+.Fn crypto_dispatch
+can pass the request to the session's driver via three different methods:
+.Bl -enum
+.It
+The request is queued to a taskqueue backed by a pool of worker threads.
+By default the pool is sized to provide one thread for each CPU.
+Worker threads dequeue requests and pass them to the driver
+asynchronously.
+.It
+The request is passed to the driver synchronously in the context of the
+thread invoking
+.Fn crypto_dispatch .
+.It
+The request is queued to a queue of pending requests.
+A single worker thread dequeues requests and passes them to the driver
+asynchronously.
+.El
+.Pp
+To select the first method (taskqueue backed by multiple threads),
+requests should set
+.Dv CRYPTO_F_ASYNC .
+To always use the third method (queue to single worker thread),
+requests should set
+.Dv CRYPTO_F_BATCH .
+If both flags are set,
+.Dv CRYPTO_F_ASYNC
+takes precedence.
+If neither flag is set,
+.Fn crypto_dispatch
+will first attempt the second method (invoke driver synchronously).
+If the driver is blocked,
+the request will be queued using the third method.
+One caveat is that the first method is only used for requests using software
+drivers which use host CPUs to process requests.
+Requests whose session is associated with a hardware driver will ignore
+.Dv CRYPTO_F_ASYNC
+and only use
+.Dv CRYPTO_F_BATCH
+to determine how requests should be scheduled.
+.Pp
+In addition to bypassing synchronous dispatch in
+.Fn crypto_dispatch ,
+.Dv CRYPTO_F_BATCH
+requests additional changes aimed at optimizing batches of requests to
+the same driver.
+When the worker thread processes a request with
+.Dv CRYPTO_F_BATCH ,
+it will search the pending request queue for any other requests for the same
+driver,
+including requests from different sessions.
+If any other requests are present,
+.Dv CRYPTO_HINT_MORE
+is passed to the driver's process method.
+Drivers may use this to batch completion interrupts.
+.Pp
+Callback function scheduling is simpler than request scheduling.
+Callbacks can either be invoked synchronously from
+.Fn crypto_done ,
+or they can be queued to a pool of worker threads.
+This pool of worker threads is also sized to provide one worker thread
+for each CPU by default.
+Note that a callback function invoked synchronously from
+.Fn crypto_done
+must follow the same restrictions placed on threaded interrupt handlers.
+.Pp
+By default,
+callbacks are invoked asynchronously by a worker thread.
+If
+.Dv CRYPTO_F_CBIMM
+is set,
+the callback is always invoked synchronously from
+.Fn crypto_done .
+If
+.Dv CRYPTO_F_CBIFSYNC
+is set,
+the callback is invoked synchronously if the request was processed by a
+software driver or asynchronously if the request was processed by a
+hardware driver.
+.Pp
+If a request was scheduled to the taskqueue via
+.Dv CRYPTO_F_ASYNC ,
+callbacks are always invoked asynchronously ignoring
+.Dv CRYPTO_F_CBIMM
+and
+.Dv CRYPTO_F_CBIFSYNC .
+In this case,
+.Dv CRYPTO_F_ASYNC_KEEPORDER
+may be set to ensure that callbacks for requests on a given session are
+invoked in the same order that requests were queued to the session via
+.Fn crypto_dispatch .
+This flag is used by IPsec to ensure that decrypted network packets are
+passed up the network stack in roughly the same order they were received.
+.Pp
+.Ss Other Request Fields
+In addition to the fields and flags enumerated above,
+.Vt struct cryptop
+includes the following:
+.Bl -tag -width crp_payload_length
+.It Fa crp_session
+A reference to the active session.
+This is set when the request is created by
+.Fn crypto_getreq
+and should not be modified.
+Drivers can use this to fetch driver-specific session state or
+session parameters.
+.It Fa crp_etype
+Error status.
+Either zero on success, or an error if a request fails.
+Set by drivers prior to completing a request via
+.Fn crypto_done .
+.It Fa crp_flags
+A bitmask of flags.
+The following flags are available in addition to flags discussed previously:
+.Bl -tag -width CRYPTO_F_DONE
+.It Dv CRYPTO_F_DONE
+Set by
+.Fn crypto_done
+before calling
+.Fa crp_callback .
+This flag is not very useful and will likely be removed in the future.
+It can only be safely checked from the callback routine at which point
+it is always set.
+.El
+.It Fa crp_cipher_key
+Pointer to a request-specific encryption key.
+If this value is not set,
+the request uses the session encryption key.
+.It Fa crp_auth_key
+Pointer to a request-specific authentication key.
+If this value is not set,
+the request uses the session authentication key.
+.It Fa crp_opaque
+An opaque pointer.
+This pointer permits users of the cryptographic framework to store
+information about a request to be used in the callback.
+.It Fa crp_callback
+Callback function.
+This must point to a callback function of type
+.Vt void (*)(struct cryptop *) .
+The callback function should inspect
+.Fa crp_etype
+to determine the status of the completed operation.
+It should also arrange for the request to be freed via
+.Fn crypto_freereq .
+.El
+.Sh RETURN VALUES
+.Fn crypto_dispatch
+returns an error if the request contained invalid fields,
+or zero if the request was valid.
+.Fn crypto_getreq
+returns a pointer to a new request structure on success,
+or
+.Dv NULL
+on failure.
+.Dv NULL
+can only be returned if
+.Dv M_NOWAIT
+was passed in
+.Fa how .
+.Sh SEE ALSO
+.Xr ipsec 4 ,
+.Xr crypto 7 ,
+.Xr crypto 9 ,
+.Xr crypto_session 9 ,
+.Xr mbuf 9 ,
+.Xr uio 9
+.Sh BUGS
+Not all drivers properly handle mixing session and per-request keys
+within a single session.
+Consumers should either use a single key for a session specified in
+the session parameters or always use per-request keys.
diff --git a/share/man/man9/crypto_session.9 b/share/man/man9/crypto_session.9
new file mode 100644
index 000000000000..bb89afa93d63
--- /dev/null
+++ b/share/man/man9/crypto_session.9
@@ -0,0 +1,245 @@
+.\" Copyright (c) 2020, Chelsio Inc
+.\"
+.\" Redistribution and use in source and binary forms, with or without
+.\" modification, are permitted provided that the following conditions are met:
+.\"
+.\" 1. Redistributions of source code must retain the above copyright notice,
+.\" this list of conditions and the following disclaimer.
+.\"
+.\" 2. Redistributions in binary form must reproduce the above copyright
+.\" notice, this list of conditions and the following disclaimer in the
+.\" documentation and/or other materials provided with the distribution.
+.\"
+.\" 3. Neither the name of the Chelsio Inc nor the names of its
+.\" contributors may be used to endorse or promote products derived from
+.\" this software without specific prior written permission.
+.\"
+.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+.\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+.\" ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+.\" LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+.\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+.\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+.\" POSSIBILITY OF SUCH DAMAGE.
+.\"
+.\" * Other names and brands may be claimed as the property of others.
+.\"
+.\" $FreeBSD$
+.\"
+.Dd March 27, 2020
+.Dt CRYPTO_SESSION 9
+.Os
+.Sh NAME
+.Nm crypto_session
+.Nd state used for symmetric cryptographic services
+.Sh SYNOPSIS
+.In opencrypto/cryptodev.h
+.Ft struct auth_hash *
+.Fn crypto_auth_hash "const struct crypto_session_params *csp"
+.Ft struct enc_xform *
+.Fn crypto_cipher "const struct crypto_session_params *csp"
+.Ft const struct crypto_session_params *
+.Fn crypto_get_params "crypto_session_t cses"
+.Ft int
+.Fo crypto_newsession
+.Fa "crypto_session_t *cses"
+.Fa "const struct crypto_session_params *csp"
+.Fa "int crid"
+.Fc
+.Ft int
+.Fn crypto_freesession "crypto_session_t cses"
+.Sh DESCRIPTION
+Symmetric cryptographic operations in the kernel are associated with
+cryptographic sessions.
+Sessions hold state shared across multiple requests.
+Active sessions are associated with a single cryptographic driver.
+.Pp
+The
+.Vt crypto_session_t
+type represents an opaque reference to an active session.
+Session objects are allocated and managed by the cryptographic
+framework.
+.Pp
+New sessions are created by
+.Fn crypto_newsession .
+.Fa csp
+describes various parameters associated with the new session such as
+the algorithms to use and any session-wide keys.
+.Fa crid
+can be used to request either a specific cryptographic driver or
+classes of drivers.
+For the latter case,
+.Fa crid
+should be set to a mask of the following values:
+.Bl -tag -width "CRYPTOCAP_F_HARDWARE"
+.It Dv CRYPTOCAP_F_HARDWARE
+Request hardware drivers.
+Hardware drivers do not use the host CPU to perform operations.
+Typically, a separate co-processor performs the operations asynchronously.
+.It Dv CRYPTOCAP_F_SOFTWARE
+Request software drivers.
+Software drivers use the host CPU to perform operations.
+The kernel includes a simple, yet portable implementation of each supported
+algorithm in the
+.Xr cryptosoft 4
+driver.
+Additional software drivers may also be available on architectures which
+provide instructions designed to accelerate cryptographic operations.
+.El
+.Pp
+If both hardware and software drivers are requested,
+hardware drivers are preferred over software drivers.
+Accelerated software drivers are preferred over the baseline software driver.
+If multiple hardware drivers are available,
+the framework will distribute sessions across these drivers in a round-robin
+fashion.
+.Pp
+On success,
+.Fn crypto_newsession
+saves a reference to the newly created session in
+.Fa cses .
+.Pp
+.Fn crypto_freesession
+is used to free the resources associated with the session
+.Fa cses .
+.Pp
+.Fn crypto_auth_hash
+returns a structure describing the baseline software implementation of an
+authentication algorithm requested by
+.Fa csp .
+If
+.Fa csp
+does not specify an authentication algorithm,
+or requests an invalid algorithm,
+.Dv NULL
+is returned.
+.Pp
+.Fn crypto_cipher
+returns a structure describing the baseline software implementation of an
+encryption algorithm requested by
+.Fa csp .
+If
+.Fa csp
+does not specify an encryption algorithm,
+or requests an invalid algorithm,
+.Dv NULL
+is returned.
+.Pp
+.Fn crypto_get_params
+returns a pointer to the session parameters used by
+.Fa cses .
+.Ss Session Parameters
+Session parameters are used to describe the cryptographic operations
+performed by cryptographic requests.
+Parameters are stored in an instance of
+.Vt struct crypto_session_params .
+When initializing parameters to pass to
+.Fn crypto_newsession ,
+the entire structure should first be zeroed.
+Needed fields should then be set leaving unused fields as zero.
+This structure contains the following fields:
+.Bl -tag -width csp_cipher_klen
+.It Fa csp_mode
+Type of operation to perform.
+This field must be set to one of the following:
+.Bl -tag -width CSP_MODE_COMPRESS
+.It Dv CSP_MODE_COMPRESS
+Compress or decompress request payload.
+.Pp
+The compression algorithm is specified in
+.Fa csp_cipher_alg .
+.It Dv CSP_MODE_CIPHER
+Encrypt or decrypt request payload.
+.Pp
+The encryption algorithm is specified in
+.Fa csp_cipher_alg .
+.It Dv CSP_MODE_DIGEST
+Compute or verify a digest, or hash, of request payload.
+.Pp
+The authentication algorithm is specified in
+.Fa csp_auth_alg .
+.It Dv CSP_MODE_AEAD
+Authenticated encryption with additional data.
+Decryption operations require the digest, or tag,
+and fail if it does not match.
+.Pp
+The AEAD algorithm is specified in
+.Fa csp_cipher_alg .
+.It Dv CSP_MODE_ETA
+Encrypt-then-Authenticate.
+In this mode, encryption operations encrypt the payload and then
+compute an authentication digest over the request additional authentication
+data followed by the encrypted payload.
+Decryption operations fail without decrypting the data if the provided digest
+does not match.
+.Pp
+The encryption algorithm is specified in
+.Fa csp_cipher_alg
+and the authentication algorithm is specified in
+.Fa csp_auth_alg .
+.El
+.It Fa csp_flags
+Currently, no additional flags are defined and this field should be set to
+zero.
+.It Fa csp_ivlen
+If either the cipher or authentication algorithms require an explicit
+initialization vector (IV) or nonce,
+this specifies the length in bytes.
+All requests for a session use the same IV length.
+.It Fa csp_cipher_alg
+Encryption or compression algorithm.
+.It Fa csp_cipher_klen
+Length of encryption or decryption key in bytes.
+All requests for a session use the same key length.
+.It Fa csp_cipher_key
+Pointer to encryption or decryption key.
+If all requests for a session use request-specific keys,
+this field should be left as
+.Dv NULL .
+This pointer and associated key must remain valid for the duration of the
+crypto session.
+.It Fa csp_auth_alg
+Authentication algorithm.
+.It Fa csp_auth_klen
+Length of authentication key in bytes.
+If the authentication algorithm does not use a key,
+this field should be left as zero.
+.It Fa csp_auth_key
+Pointer to the authentication key.
+If all requests for a session use request-specific keys,
+this field should be left as
+.Dv NULL .
+This pointer and associated key must remain valid for the duration of the
+crypto session.
+.It Fa csp_auth_mlen
+The length in bytes of the digest.
+If zero, the full length of the digest is used.
+If non-zero, the first
+.Fa csp_auth_mlen
+bytes of the digest are used.
+.El
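Putting the fields together, creating an AES-CBC cipher session might look like the sketch below. The key pointer and its length are assumptions for illustration; the key must remain valid for the life of the session.

```c
struct crypto_session_params csp;
crypto_session_t cses;
int error;

memset(&csp, 0, sizeof(csp));		/* zero unused fields first */
csp.csp_mode = CSP_MODE_CIPHER;
csp.csp_cipher_alg = CRYPTO_AES_CBC;
csp.csp_cipher_key = key;		/* assumed caller-provided key */
csp.csp_cipher_klen = 32;		/* AES-256 */
csp.csp_ivlen = 16;			/* AES block size */
error = crypto_newsession(&cses, &csp,
    CRYPTOCAP_F_HARDWARE | CRYPTOCAP_F_SOFTWARE);
if (error == 0)
	crypto_freesession(cses);
```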
+.Sh RETURN VALUES
+.Fn crypto_newsession
+returns a non-zero value if an error occurs or zero on success.
+.Pp
+.Fn crypto_auth_hash
+and
+.Fn crypto_cipher
+return
+.Dv NULL
+if the request is invalid or a pointer to a structure on success.
+.Sh SEE ALSO
+.Xr crypto 7 ,
+.Xr crypto 9 ,
+.Xr crypto_request 9
+.Sh BUGS
+The current implementation of
+.Fn crypto_freesession
+does not provide a way for the caller to know that there are no other
+references to the keys stored in the session's associated parameters.
+This function should probably sleep until any in-flight cryptographic
+operations associated with the session are completed.
diff --git a/sys/crypto/aesni/aesni.c b/sys/crypto/aesni/aesni.c
index 27ef26e43ec4..284f460b8415 100644
--- a/sys/crypto/aesni/aesni.c
+++ b/sys/crypto/aesni/aesni.c
@@ -88,16 +88,13 @@ struct aesni_softc {
(ctx) = NULL; \
} while (0)
-static int aesni_newsession(device_t, crypto_session_t cses,
- struct cryptoini *cri);
static int aesni_cipher_setup(struct aesni_session *ses,
- struct cryptoini *encini, struct cryptoini *authini);
-static int aesni_cipher_process(struct aesni_session *ses,
- struct cryptodesc *enccrd, struct cryptodesc *authcrd, struct cryptop *crp);
-static int aesni_cipher_crypt(struct aesni_session *ses,
- struct cryptodesc *enccrd, struct cryptodesc *authcrd, struct cryptop *crp);
-static int aesni_cipher_mac(struct aesni_session *ses, struct cryptodesc *crd,
- struct cryptop *crp);
+ const struct crypto_session_params *csp);
+static int aesni_cipher_process(struct aesni_session *ses, struct cryptop *crp);
+static int aesni_cipher_crypt(struct aesni_session *ses, struct cryptop *crp,
+ const struct crypto_session_params *csp);
+static int aesni_cipher_mac(struct aesni_session *ses, struct cryptop *crp,
+ const struct crypto_session_params *csp);
MALLOC_DEFINE(M_AESNI, "aesni_data", "AESNI Data");
@@ -170,7 +167,7 @@ aesni_attach(device_t dev)
sc = device_get_softc(dev);
sc->cid = crypto_get_driverid(dev, sizeof(struct aesni_session),
- CRYPTOCAP_F_HARDWARE | CRYPTOCAP_F_SYNC);
+ CRYPTOCAP_F_SOFTWARE | CRYPTOCAP_F_SYNC);
if (sc->cid < 0) {
device_printf(dev, "Could not get crypto driver id.\n");
return (ENOMEM);
@@ -187,25 +184,6 @@ aesni_attach(device_t dev)
}
detect_cpu_features(&sc->has_aes, &sc->has_sha);
- if (sc->has_aes) {
- crypto_register(sc->cid, CRYPTO_AES_CBC, 0, 0);
- crypto_register(sc->cid, CRYPTO_AES_ICM, 0, 0);
- crypto_register(sc->cid, CRYPTO_AES_NIST_GCM_16, 0, 0);
- crypto_register(sc->cid, CRYPTO_AES_128_NIST_GMAC, 0, 0);
- crypto_register(sc->cid, CRYPTO_AES_192_NIST_GMAC, 0, 0);
- crypto_register(sc->cid, CRYPTO_AES_256_NIST_GMAC, 0, 0);
- crypto_register(sc->cid, CRYPTO_AES_XTS, 0, 0);
- crypto_register(sc->cid, CRYPTO_AES_CCM_16, 0, 0);
- crypto_register(sc->cid, CRYPTO_AES_CCM_CBC_MAC, 0, 0);
- }
- if (sc->has_sha) {
- crypto_register(sc->cid, CRYPTO_SHA1, 0, 0);
- crypto_register(sc->cid, CRYPTO_SHA1_HMAC, 0, 0);
- crypto_register(sc->cid, CRYPTO_SHA2_224, 0, 0);
- crypto_register(sc->cid, CRYPTO_SHA2_224_HMAC, 0, 0);
- crypto_register(sc->cid, CRYPTO_SHA2_256, 0, 0);
- crypto_register(sc->cid, CRYPTO_SHA2_256_HMAC, 0, 0);
- }
return (0);
}
@@ -223,115 +201,125 @@ aesni_detach(device_t dev)
return (0);
}
-static int
-aesni_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
+static bool
+aesni_auth_supported(struct aesni_softc *sc,
+ const struct crypto_session_params *csp)
{
- struct aesni_softc *sc;
- struct aesni_session *ses;
- struct cryptoini *encini, *authini;
- bool gcm_hash, gcm;
- bool cbc_hash, ccm;
- int error;
- KASSERT(cses != NULL, ("EDOOFUS"));
- if (cri == NULL) {
- CRYPTDEB("no cri");
- return (EINVAL);
+ if (!sc->has_sha)
+ return (false);
+
+ switch (csp->csp_auth_alg) {
+ case CRYPTO_SHA1:
+ case CRYPTO_SHA2_224:
+ case CRYPTO_SHA2_256:
+ case CRYPTO_SHA1_HMAC:
+ case CRYPTO_SHA2_224_HMAC:
+ case CRYPTO_SHA2_256_HMAC:
+ break;
+ default:
+ return (false);
}
- sc = device_get_softc(dev);
+ return (true);
+}
- ses = crypto_get_driver_session(cses);
+static bool
+aesni_cipher_supported(struct aesni_softc *sc,
+ const struct crypto_session_params *csp)
+{
+
+ if (!sc->has_aes)
+ return (false);
+
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_AES_CBC:
+ case CRYPTO_AES_ICM:
+ if (csp->csp_ivlen != AES_BLOCK_LEN)
+ return (false);
+ return (sc->has_aes);
+ case CRYPTO_AES_XTS:
+ if (csp->csp_ivlen != AES_XTS_IV_LEN)
+ return (false);
+ return (sc->has_aes);
+ default:
+ return (false);
+ }
+}
- authini = NULL;
- encini = NULL;
- gcm = false;
- gcm_hash = false;
- ccm = cbc_hash = false;
+static int
+aesni_probesession(device_t dev, const struct crypto_session_params *csp)
+{
+ struct aesni_softc *sc;
- for (; cri != NULL; cri = cri->cri_next) {
- switch (cri->cri_alg) {
+ sc = device_get_softc(dev);
+ if (csp->csp_flags != 0)
+ return (EINVAL);
+ switch (csp->csp_mode) {
+ case CSP_MODE_DIGEST:
+ if (!aesni_auth_supported(sc, csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_CIPHER:
+ if (!aesni_cipher_supported(sc, csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_AEAD:
+ switch (csp->csp_cipher_alg) {
case CRYPTO_AES_NIST_GCM_16:
- case CRYPTO_AES_CCM_16:
- if (cri->cri_alg == CRYPTO_AES_NIST_GCM_16) {
- gcm = true;
- } else if (cri->cri_alg == CRYPTO_AES_CCM_16) {
- ccm = true;
- }
- /* FALLTHROUGH */
- case CRYPTO_AES_CBC:
- case CRYPTO_AES_ICM:
- case CRYPTO_AES_XTS:
- if (!sc->has_aes)
- goto unhandled;
- if (encini != NULL) {
- CRYPTDEB("encini already set");
+ if (csp->csp_auth_mlen != 0 &&
+ csp->csp_auth_mlen != GMAC_DIGEST_LEN)
return (EINVAL);
- }
- encini = cri;
- break;
- case CRYPTO_AES_CCM_CBC_MAC:
- cbc_hash = true;
- authini = cri;
- break;
- case CRYPTO_AES_128_NIST_GMAC:
- case CRYPTO_AES_192_NIST_GMAC:
- case CRYPTO_AES_256_NIST_GMAC:
- /*
- * nothing to do here, maybe in the future cache some
- * values for GHASH
- */
- if (authini != NULL) {
- CRYPTDEB("authini already set");
+ if (csp->csp_ivlen != AES_GCM_IV_LEN ||
+ !sc->has_aes)
return (EINVAL);
- }
- gcm_hash = true;
- authini = cri;
break;
- case CRYPTO_SHA1:
- case CRYPTO_SHA1_HMAC:
- case CRYPTO_SHA2_224:
- case CRYPTO_SHA2_224_HMAC:
- case CRYPTO_SHA2_256:
- case CRYPTO_SHA2_256_HMAC:
- if (!sc->has_sha)
- goto unhandled;
- if (authini != NULL) {
- CRYPTDEB("authini already set");
+ case CRYPTO_AES_CCM_16:
+ if (csp->csp_auth_mlen != 0 &&
+ csp->csp_auth_mlen != AES_CBC_MAC_HASH_LEN)
+ return (EINVAL);
+ if (csp->csp_ivlen != AES_CCM_IV_LEN ||
+ !sc->has_aes)
return (EINVAL);
- }
- authini = cri;
break;
default:
-unhandled:
- CRYPTDEB("unhandled algorithm");
return (EINVAL);
}
- }
- if (encini == NULL && authini == NULL) {
- CRYPTDEB("no cipher");
- return (EINVAL);
- }
- /*
- * GMAC algorithms are only supported with simultaneous GCM. Likewise
- * GCM is not supported without GMAC.
- */
- if (gcm_hash != gcm) {
- CRYPTDEB("gcm_hash != gcm");
+ break;
+ case CSP_MODE_ETA:
+ if (!aesni_auth_supported(sc, csp) ||
+ !aesni_cipher_supported(sc, csp))
+ return (EINVAL);
+ break;
+ default:
return (EINVAL);
}
- if (cbc_hash != ccm) {
- CRYPTDEB("cbc_hash != ccm");
- return (EINVAL);
- }
+ return (CRYPTODEV_PROBE_ACCEL_SOFTWARE);
+}
- if (encini != NULL)
- ses->algo = encini->cri_alg;
- if (authini != NULL)
- ses->auth_algo = authini->cri_alg;
+static int
+aesni_newsession(device_t dev, crypto_session_t cses,
+ const struct crypto_session_params *csp)
+{
+ struct aesni_softc *sc;
+ struct aesni_session *ses;
+ int error;
+
+ sc = device_get_softc(dev);
- error = aesni_cipher_setup(ses, encini, authini);
+ ses = crypto_get_driver_session(cses);
+
+ switch (csp->csp_mode) {
+ case CSP_MODE_DIGEST:
+ case CSP_MODE_CIPHER:
+ case CSP_MODE_AEAD:
+ case CSP_MODE_ETA:
+ break;
+ default:
+ return (EINVAL);
+ }
+ error = aesni_cipher_setup(ses, csp);
if (error != 0) {
CRYPTDEB("setup failed");
return (error);
@@ -344,108 +332,31 @@ static int
aesni_process(device_t dev, struct cryptop *crp, int hint __unused)
{
struct aesni_session *ses;
- struct cryptodesc *crd, *enccrd, *authcrd;
- int error, needauth;
-
- ses = NULL;
- error = 0;
- enccrd = NULL;
- authcrd = NULL;
- needauth = 0;
-
- /* Sanity check. */
- if (crp == NULL)
- return (EINVAL);
-
- if (crp->crp_callback == NULL || crp->crp_desc == NULL ||
- crp->crp_session == NULL) {
- error = EINVAL;
- goto out;
- }
-
- for (crd = crp->crp_desc; crd != NULL; crd = crd->crd_next) {
- switch (crd->crd_alg) {
- case CRYPTO_AES_NIST_GCM_16:
- case CRYPTO_AES_CCM_16:
- needauth = 1;
- /* FALLTHROUGH */
- case CRYPTO_AES_CBC:
- case CRYPTO_AES_ICM:
- case CRYPTO_AES_XTS:
- if (enccrd != NULL) {
- error = EINVAL;
- goto out;
- }
- enccrd = crd;
- break;
-
- case CRYPTO_AES_128_NIST_GMAC:
- case CRYPTO_AES_192_NIST_GMAC:
- case CRYPTO_AES_256_NIST_GMAC:
- case CRYPTO_AES_CCM_CBC_MAC:
- case CRYPTO_SHA1:
- case CRYPTO_SHA1_HMAC:
- case CRYPTO_SHA2_224:
- case CRYPTO_SHA2_224_HMAC:
- case CRYPTO_SHA2_256:
- case CRYPTO_SHA2_256_HMAC:
- if (authcrd != NULL) {
- error = EINVAL;
- goto out;
- }
- authcrd = crd;
- break;
-
- default:
- error = EINVAL;
- goto out;
- }
- }
-
- if ((enccrd == NULL && authcrd == NULL) ||
- (needauth && authcrd == NULL)) {
- error = EINVAL;
- goto out;
- }
-
- /* CBC & XTS can only handle full blocks for now */
- if (enccrd != NULL && (enccrd->crd_alg == CRYPTO_AES_CBC ||
- enccrd->crd_alg == CRYPTO_AES_XTS) &&
- (enccrd->crd_len % AES_BLOCK_LEN) != 0) {
- error = EINVAL;
- goto out;
- }
+ int error;
ses = crypto_get_driver_session(crp->crp_session);
- KASSERT(ses != NULL, ("EDOOFUS"));
- error = aesni_cipher_process(ses, enccrd, authcrd, crp);
- if (error != 0)
- goto out;
+ error = aesni_cipher_process(ses, crp);
-out:
crp->crp_etype = error;
crypto_done(crp);
- return (error);
+ return (0);
}
static uint8_t *
-aesni_cipher_alloc(struct cryptodesc *enccrd, struct cryptop *crp,
- bool *allocated)
+aesni_cipher_alloc(struct cryptop *crp, int start, int length, bool *allocated)
{
uint8_t *addr;
- addr = crypto_contiguous_subsegment(crp->crp_flags,
- crp->crp_buf, enccrd->crd_skip, enccrd->crd_len);
+ addr = crypto_contiguous_subsegment(crp, start, length);
if (addr != NULL) {
*allocated = false;
return (addr);
}
- addr = malloc(enccrd->crd_len, M_AESNI, M_NOWAIT);
+ addr = malloc(length, M_AESNI, M_NOWAIT);
if (addr != NULL) {
*allocated = true;
- crypto_copydata(crp->crp_flags, crp->crp_buf, enccrd->crd_skip,
- enccrd->crd_len, addr);
+ crypto_copydata(crp, start, length, addr);
} else
*allocated = false;
return (addr);
@@ -457,6 +368,7 @@ static device_method_t aesni_methods[] = {
DEVMETHOD(device_attach, aesni_attach),
DEVMETHOD(device_detach, aesni_detach),
+ DEVMETHOD(cryptodev_probesession, aesni_probesession),
DEVMETHOD(cryptodev_newsession, aesni_newsession),
DEVMETHOD(cryptodev_process, aesni_process),
@@ -474,63 +386,7 @@ DRIVER_MODULE(aesni, nexus, aesni_driver, aesni_devclass, 0, 0);
MODULE_VERSION(aesni, 1);
MODULE_DEPEND(aesni, crypto, 1, 1, 1);
-static int
-aesni_authprepare(struct aesni_session *ses, int klen, const void *cri_key)
-{
- int keylen;
-
- if (klen % 8 != 0)
- return (EINVAL);
- keylen = klen / 8;
- if (keylen > sizeof(ses->hmac_key))
- return (EINVAL);
- if (ses->auth_algo == CRYPTO_SHA1 && keylen > 0)
- return (EINVAL);
- memcpy(ses->hmac_key, cri_key, keylen);
- return (0);
-}
-
-static int
-aesni_cipher_setup(struct aesni_session *ses, struct cryptoini *encini,
- struct cryptoini *authini)
-{
- struct fpu_kern_ctx *ctx;
- int kt, ctxidx, error;
-
- switch (ses->auth_algo) {
- case CRYPTO_SHA1:
- case CRYPTO_SHA1_HMAC:
- case CRYPTO_SHA2_224:
- case CRYPTO_SHA2_224_HMAC:
- case CRYPTO_SHA2_256:
- case CRYPTO_SHA2_256_HMAC:
- error = aesni_authprepare(ses, authini->cri_klen,
- authini->cri_key);
- if (error != 0)
- return (error);
- ses->mlen = authini->cri_mlen;
- }
-
- kt = is_fpu_kern_thread(0) || (encini == NULL);
- if (!kt) {
- ACQUIRE_CTX(ctxidx, ctx);
- fpu_kern_enter(curthread, ctx,
- FPU_KERN_NORMAL | FPU_KERN_KTHR);
- }
-
- error = 0;
- if (encini != NULL)
- error = aesni_cipher_setup_common(ses, encini->cri_key,
- encini->cri_klen);
-
- if (!kt) {
- fpu_kern_leave(curthread, ctx);
- RELEASE_CTX(ctxidx, ctx);
- }
- return (error);
-}
-
-static int
+static void
intel_sha1_update(void *vctx, const void *vdata, u_int datalen)
{
struct sha1_ctxt *ctx = vctx;
@@ -563,7 +419,6 @@ intel_sha1_update(void *vctx, const void *vdata, u_int datalen)
intel_sha1_step(ctx->h.b32, (void *)ctx->m.b8, 1);
off += copysiz;
}
- return (0);
}
static void
@@ -578,7 +433,7 @@ SHA1_Finalize_fn(void *digest, void *ctx)
sha1_result(ctx, digest);
}
-static int
+static void
intel_sha256_update(void *vctx, const void *vdata, u_int len)
{
SHA256_CTX *ctx = vctx;
@@ -599,7 +454,7 @@ intel_sha256_update(void *vctx, const void *vdata, u_int len)
/* Handle the case where we don't need to perform any transforms */
if (len < 64 - r) {
memcpy(&ctx->buf[r], src, len);
- return (0);
+ return;
}
/* Finish the current block */
@@ -618,7 +473,6 @@ intel_sha256_update(void *vctx, const void *vdata, u_int len)
/* Copy left over data into buffer */
memcpy(ctx->buf, src, len);
- return (0);
}
static void
@@ -645,42 +499,145 @@ SHA256_Finalize_fn(void *digest, void *ctx)
SHA256_Final(digest, ctx);
}
-/*
- * Compute the HASH( (key ^ xorbyte) || buf )
- */
-static void
-hmac_internal(void *ctx, uint32_t *res,
- int (*update)(void *, const void *, u_int),
- void (*finalize)(void *, void *), uint8_t *key, uint8_t xorbyte,
- const void *buf, size_t off, size_t buflen, int crpflags)
+static int
+aesni_authprepare(struct aesni_session *ses, int klen)
{
- size_t i;
- for (i = 0; i < 64; i++)
- key[i] ^= xorbyte;
- update(ctx, key, 64);
- for (i = 0; i < 64; i++)
- key[i] ^= xorbyte;
+ if (klen > SHA1_BLOCK_LEN)
+ return (EINVAL);
+ if ((ses->hmac && klen == 0) || (!ses->hmac && klen != 0))
+ return (EINVAL);
+ return (0);
+}
+
+static int
+aesni_cipherprepare(const struct crypto_session_params *csp)
+{
+
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_AES_ICM:
+ case CRYPTO_AES_NIST_GCM_16:
+ case CRYPTO_AES_CCM_16:
+ case CRYPTO_AES_CBC:
+ switch (csp->csp_cipher_klen * 8) {
+ case 128:
+ case 192:
+ case 256:
+ break;
+ default:
+ CRYPTDEB("invalid CBC/ICM/GCM key length");
+ return (EINVAL);
+ }
+ break;
+ case CRYPTO_AES_XTS:
+ switch (csp->csp_cipher_klen * 8) {
+ case 256:
+ case 512:
+ break;
+ default:
+ CRYPTDEB("invalid XTS key length");
+ return (EINVAL);
+ }
+ break;
+ default:
+ return (EINVAL);
+ }
+ return (0);
+}
+
+static int
+aesni_cipher_setup(struct aesni_session *ses,
+ const struct crypto_session_params *csp)
+{
+ struct fpu_kern_ctx *ctx;
+ int kt, ctxidx, error;
+
+ switch (csp->csp_auth_alg) {
+ case CRYPTO_SHA1_HMAC:
+ ses->hmac = true;
+ /* FALLTHROUGH */
+ case CRYPTO_SHA1:
+ ses->hash_len = SHA1_HASH_LEN;
+ ses->hash_init = SHA1_Init_fn;
+ ses->hash_update = intel_sha1_update;
+ ses->hash_finalize = SHA1_Finalize_fn;
+ break;
+ case CRYPTO_SHA2_224_HMAC:
+ ses->hmac = true;
+ /* FALLTHROUGH */
+ case CRYPTO_SHA2_224:
+ ses->hash_len = SHA2_224_HASH_LEN;
+ ses->hash_init = SHA224_Init_fn;
+ ses->hash_update = intel_sha256_update;
+ ses->hash_finalize = SHA224_Finalize_fn;
+ break;
+ case CRYPTO_SHA2_256_HMAC:
+ ses->hmac = true;
+ /* FALLTHROUGH */
+ case CRYPTO_SHA2_256:
+ ses->hash_len = SHA2_256_HASH_LEN;
+ ses->hash_init = SHA256_Init_fn;
+ ses->hash_update = intel_sha256_update;
+ ses->hash_finalize = SHA256_Finalize_fn;
+ break;
+ }
+
+ if (ses->hash_len != 0) {
+ if (csp->csp_auth_mlen == 0)
+ ses->mlen = ses->hash_len;
+ else
+ ses->mlen = csp->csp_auth_mlen;
- crypto_apply(crpflags, __DECONST(void *, buf), off, buflen,
- __DECONST(int (*)(void *, void *, u_int), update), ctx);
- finalize(res, ctx);
+ error = aesni_authprepare(ses, csp->csp_auth_klen);
+ if (error != 0)
+ return (error);
+ }
+
+ error = aesni_cipherprepare(csp);
+ if (error != 0)
+ return (error);
+
+ kt = is_fpu_kern_thread(0) || (csp->csp_cipher_alg == 0);
+ if (!kt) {
+ ACQUIRE_CTX(ctxidx, ctx);
+ fpu_kern_enter(curthread, ctx,
+ FPU_KERN_NORMAL | FPU_KERN_KTHR);
+ }
+
+ error = 0;
+ if (csp->csp_cipher_key != NULL)
+ aesni_cipher_setup_common(ses, csp, csp->csp_cipher_key,
+ csp->csp_cipher_klen);
+
+ if (!kt) {
+ fpu_kern_leave(curthread, ctx);
+ RELEASE_CTX(ctxidx, ctx);
+ }
+ return (error);
}
static int
-aesni_cipher_process(struct aesni_session *ses, struct cryptodesc *enccrd,
- struct cryptodesc *authcrd, struct cryptop *crp)
+aesni_cipher_process(struct aesni_session *ses, struct cryptop *crp)
{
+ const struct crypto_session_params *csp;
struct fpu_kern_ctx *ctx;
int error, ctxidx;
bool kt;
- if (enccrd != NULL) {
- if ((enccrd->crd_alg == CRYPTO_AES_ICM ||
- enccrd->crd_alg == CRYPTO_AES_CCM_16 ||
- enccrd->crd_alg == CRYPTO_AES_NIST_GCM_16) &&
- (enccrd->crd_flags & CRD_F_IV_EXPLICIT) == 0)
+ csp = crypto_get_params(crp->crp_session);
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_AES_ICM:
+ case CRYPTO_AES_NIST_GCM_16:
+ case CRYPTO_AES_CCM_16:
+ if ((crp->crp_flags & CRYPTO_F_IV_SEPARATE) == 0)
return (EINVAL);
+ break;
+ case CRYPTO_AES_CBC:
+ case CRYPTO_AES_XTS:
+ /* CBC & XTS can only handle full blocks for now */
+ if ((crp->crp_payload_length % AES_BLOCK_LEN) != 0)
+ return (EINVAL);
+ break;
}
ctx = NULL;
@@ -694,28 +651,21 @@ aesni_cipher_process(struct aesni_session *ses, struct cryptodesc *enccrd,
}
/* Do work */
- if (enccrd != NULL && authcrd != NULL) {
- /* Perform the first operation */
- if (crp->crp_desc == enccrd)
- error = aesni_cipher_crypt(ses, enccrd, authcrd, crp);
- else
- error = aesni_cipher_mac(ses, authcrd, crp);
- if (error != 0)
- goto out;
- /* Perform the second operation */
- if (crp->crp_desc == enccrd)
- error = aesni_cipher_mac(ses, authcrd, crp);
- else
- error = aesni_cipher_crypt(ses, enccrd, authcrd, crp);
- } else if (enccrd != NULL)
- error = aesni_cipher_crypt(ses, enccrd, authcrd, crp);
+ if (csp->csp_mode == CSP_MODE_ETA) {
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) {
+ error = aesni_cipher_crypt(ses, crp, csp);
+ if (error == 0)
+ error = aesni_cipher_mac(ses, crp, csp);
+ } else {
+ error = aesni_cipher_mac(ses, crp, csp);
+ if (error == 0)
+ error = aesni_cipher_crypt(ses, crp, csp);
+ }
+ } else if (csp->csp_mode == CSP_MODE_DIGEST)
+ error = aesni_cipher_mac(ses, crp, csp);
else
- error = aesni_cipher_mac(ses, authcrd, crp);
+ error = aesni_cipher_crypt(ses, crp, csp);
- if (error != 0)
- goto out;
-
-out:
if (!kt) {
fpu_kern_leave(curthread, ctx);
RELEASE_CTX(ctxidx, ctx);
@@ -724,28 +674,24 @@ out:
}
static int
-aesni_cipher_crypt(struct aesni_session *ses, struct cryptodesc *enccrd,
- struct cryptodesc *authcrd, struct cryptop *crp)
+aesni_cipher_crypt(struct aesni_session *ses, struct cryptop *crp,
+ const struct crypto_session_params *csp)
{
uint8_t iv[AES_BLOCK_LEN], tag[GMAC_DIGEST_LEN], *buf, *authbuf;
- int error, ivlen;
+ int error;
bool encflag, allocated, authallocated;
- KASSERT((ses->algo != CRYPTO_AES_NIST_GCM_16 &&
- ses->algo != CRYPTO_AES_CCM_16) || authcrd != NULL,
- ("AES_NIST_GCM_16/AES_CCM_16 must include MAC descriptor"));
-
- ivlen = 0;
- authbuf = NULL;
-
- buf = aesni_cipher_alloc(enccrd, crp, &allocated);
+ buf = aesni_cipher_alloc(crp, crp->crp_payload_start,
+ crp->crp_payload_length, &allocated);
if (buf == NULL)
return (ENOMEM);
authallocated = false;
- if (ses->algo == CRYPTO_AES_NIST_GCM_16 ||
- ses->algo == CRYPTO_AES_CCM_16) {
- authbuf = aesni_cipher_alloc(authcrd, crp, &authallocated);
+ authbuf = NULL;
+ if (csp->csp_cipher_alg == CRYPTO_AES_NIST_GCM_16 ||
+ csp->csp_cipher_alg == CRYPTO_AES_CCM_16) {
+ authbuf = aesni_cipher_alloc(crp, crp->crp_aad_start,
+ crp->crp_aad_length, &authallocated);
if (authbuf == NULL) {
error = ENOMEM;
goto out;
@@ -753,221 +699,161 @@ aesni_cipher_crypt(struct aesni_session *ses, struct cryptodesc *enccrd,
}
error = 0;
- encflag = (enccrd->crd_flags & CRD_F_ENCRYPT) == CRD_F_ENCRYPT;
- if ((enccrd->crd_flags & CRD_F_KEY_EXPLICIT) != 0) {
- error = aesni_cipher_setup_common(ses, enccrd->crd_key,
- enccrd->crd_klen);
- if (error != 0)
- goto out;
- }
-
- switch (enccrd->crd_alg) {
- case CRYPTO_AES_CBC:
- case CRYPTO_AES_ICM:
- ivlen = AES_BLOCK_LEN;
- break;
- case CRYPTO_AES_XTS:
- ivlen = 8;
- break;
- case CRYPTO_AES_NIST_GCM_16:
- case CRYPTO_AES_CCM_16:
- ivlen = 12; /* should support arbitarily larger */
- break;
- }
+ encflag = CRYPTO_OP_IS_ENCRYPT(crp->crp_op);
+ if (crp->crp_cipher_key != NULL)
+ aesni_cipher_setup_common(ses, csp, crp->crp_cipher_key,
+ csp->csp_cipher_klen);
/* Setup iv */
- if (encflag) {
- if ((enccrd->crd_flags & CRD_F_IV_EXPLICIT) != 0)
- bcopy(enccrd->crd_iv, iv, ivlen);
- else
- arc4rand(iv, ivlen, 0);
-
- if ((enccrd->crd_flags & CRD_F_IV_PRESENT) == 0)
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- enccrd->crd_inject, ivlen, iv);
- } else {
- if ((enccrd->crd_flags & CRD_F_IV_EXPLICIT) != 0)
- bcopy(enccrd->crd_iv, iv, ivlen);
- else
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- enccrd->crd_inject, ivlen, iv);
- }
+ if (crp->crp_flags & CRYPTO_F_IV_GENERATE) {
+ arc4rand(iv, csp->csp_ivlen, 0);
+ crypto_copyback(crp, crp->crp_iv_start, csp->csp_ivlen, iv);
+ } else if (crp->crp_flags & CRYPTO_F_IV_SEPARATE)
+ memcpy(iv, crp->crp_iv, csp->csp_ivlen);
+ else
+ crypto_copydata(crp, crp->crp_iv_start, csp->csp_ivlen, iv);
- switch (ses->algo) {
+ switch (csp->csp_cipher_alg) {
case CRYPTO_AES_CBC:
if (encflag)
aesni_encrypt_cbc(ses->rounds, ses->enc_schedule,
- enccrd->crd_len, buf, buf, iv);
+ crp->crp_payload_length, buf, buf, iv);
else
aesni_decrypt_cbc(ses->rounds, ses->dec_schedule,
- enccrd->crd_len, buf, iv);
+ crp->crp_payload_length, buf, iv);
break;
case CRYPTO_AES_ICM:
/* encryption & decryption are the same */
aesni_encrypt_icm(ses->rounds, ses->enc_schedule,
- enccrd->crd_len, buf, buf, iv);
+ crp->crp_payload_length, buf, buf, iv);
break;
case CRYPTO_AES_XTS:
if (encflag)
aesni_encrypt_xts(ses->rounds, ses->enc_schedule,
- ses->xts_schedule, enccrd->crd_len, buf, buf,
- iv);
+ ses->xts_schedule, crp->crp_payload_length, buf,
+ buf, iv);
else
aesni_decrypt_xts(ses->rounds, ses->dec_schedule,
- ses->xts_schedule, enccrd->crd_len, buf, buf,
- iv);
+ ses->xts_schedule, crp->crp_payload_length, buf,
+ buf, iv);
break;
case CRYPTO_AES_NIST_GCM_16:
- if (!encflag)
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- authcrd->crd_inject, sizeof(tag), tag);
- else
- bzero(tag, sizeof tag);
-
if (encflag) {
+ memset(tag, 0, sizeof(tag));
AES_GCM_encrypt(buf, buf, authbuf, iv, tag,
- enccrd->crd_len, authcrd->crd_len, ivlen,
- ses->enc_schedule, ses->rounds);
-
- if (authcrd != NULL)
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- authcrd->crd_inject, sizeof(tag), tag);
+ crp->crp_payload_length, crp->crp_aad_length,
+ csp->csp_ivlen, ses->enc_schedule, ses->rounds);
+ crypto_copyback(crp, crp->crp_digest_start, sizeof(tag),
+ tag);
} else {
+ crypto_copydata(crp, crp->crp_digest_start, sizeof(tag),
+ tag);
if (!AES_GCM_decrypt(buf, buf, authbuf, iv, tag,
- enccrd->crd_len, authcrd->crd_len, ivlen,
- ses->enc_schedule, ses->rounds))
+ crp->crp_payload_length, crp->crp_aad_length,
+ csp->csp_ivlen, ses->enc_schedule, ses->rounds))
error = EBADMSG;
}
break;
case CRYPTO_AES_CCM_16:
- if (!encflag)
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- authcrd->crd_inject, sizeof(tag), tag);
- else
- bzero(tag, sizeof tag);
if (encflag) {
+ memset(tag, 0, sizeof(tag));
AES_CCM_encrypt(buf, buf, authbuf, iv, tag,
- enccrd->crd_len, authcrd->crd_len, ivlen,
- ses->enc_schedule, ses->rounds);
- if (authcrd != NULL)
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- authcrd->crd_inject, sizeof(tag), tag);
+ crp->crp_payload_length, crp->crp_aad_length,
+ csp->csp_ivlen, ses->enc_schedule, ses->rounds);
+ crypto_copyback(crp, crp->crp_digest_start, sizeof(tag),
+ tag);
} else {
+ crypto_copydata(crp, crp->crp_digest_start, sizeof(tag),
+ tag);
if (!AES_CCM_decrypt(buf, buf, authbuf, iv, tag,
- enccrd->crd_len, authcrd->crd_len, ivlen,
- ses->enc_schedule, ses->rounds))
+ crp->crp_payload_length, crp->crp_aad_length,
+ csp->csp_ivlen, ses->enc_schedule, ses->rounds))
error = EBADMSG;
}
break;
}
if (allocated && error == 0)
- crypto_copyback(crp->crp_flags, crp->crp_buf, enccrd->crd_skip,
- enccrd->crd_len, buf);
+ crypto_copyback(crp, crp->crp_payload_start,
+ crp->crp_payload_length, buf);
out:
if (allocated) {
- explicit_bzero(buf, enccrd->crd_len);
+ explicit_bzero(buf, crp->crp_payload_length);
free(buf, M_AESNI);
}
if (authallocated) {
- explicit_bzero(authbuf, authcrd->crd_len);
+ explicit_bzero(authbuf, crp->crp_aad_length);
free(authbuf, M_AESNI);
}
return (error);
}
static int
-aesni_cipher_mac(struct aesni_session *ses, struct cryptodesc *crd,
- struct cryptop *crp)
+aesni_cipher_mac(struct aesni_session *ses, struct cryptop *crp,
+ const struct crypto_session_params *csp)
{
union {
struct SHA256Context sha2 __aligned(16);
struct sha1_ctxt sha1 __aligned(16);
} sctx;
+ uint8_t hmac_key[SHA1_BLOCK_LEN] __aligned(16);
uint32_t res[SHA2_256_HASH_LEN / sizeof(uint32_t)];
- int hashlen, error;
- void *ctx;
- void (*InitFn)(void *);
- int (*UpdateFn)(void *, const void *, unsigned);
- void (*FinalizeFn)(void *, void *);
-
- bool hmac;
-
- if ((crd->crd_flags & ~CRD_F_KEY_EXPLICIT) != 0) {
- CRYPTDEB("%s: Unsupported MAC flags: 0x%x", __func__,
- (crd->crd_flags & ~CRD_F_KEY_EXPLICIT));
- return (EINVAL);
- }
- if ((crd->crd_flags & CRD_F_KEY_EXPLICIT) != 0) {
- error = aesni_authprepare(ses, crd->crd_klen, crd->crd_key);
- if (error != 0)
- return (error);
- }
-
- hmac = false;
- switch (ses->auth_algo) {
- case CRYPTO_SHA1_HMAC:
- hmac = true;
- /* FALLTHROUGH */
- case CRYPTO_SHA1:
- hashlen = SHA1_HASH_LEN;
- InitFn = SHA1_Init_fn;
- UpdateFn = intel_sha1_update;
- FinalizeFn = SHA1_Finalize_fn;
- ctx = &sctx.sha1;
- break;
+ uint32_t res2[SHA2_256_HASH_LEN / sizeof(uint32_t)];
+ const uint8_t *key;
+ int i, keylen;
- case CRYPTO_SHA2_256_HMAC:
- hmac = true;
- /* FALLTHROUGH */
- case CRYPTO_SHA2_256:
- hashlen = SHA2_256_HASH_LEN;
- InitFn = SHA256_Init_fn;
- UpdateFn = intel_sha256_update;
- FinalizeFn = SHA256_Finalize_fn;
- ctx = &sctx.sha2;
- break;
-
- case CRYPTO_SHA2_224_HMAC:
- hmac = true;
- /* FALLTHROUGH */
- case CRYPTO_SHA2_224:
- hashlen = SHA2_224_HASH_LEN;
- InitFn = SHA224_Init_fn;
- UpdateFn = intel_sha256_update;
- FinalizeFn = SHA224_Finalize_fn;
- ctx = &sctx.sha2;
- break;
- default:
- /*
- * AES-GMAC authentication is verified while processing the
- * enccrd
- */
- return (0);
- }
+ if (crp->crp_auth_key != NULL)
+ key = crp->crp_auth_key;
+ else
+ key = csp->csp_auth_key;
+ keylen = csp->csp_auth_klen;
- if (hmac) {
+ if (ses->hmac) {
/* Inner hash: (K ^ IPAD) || data */
- InitFn(ctx);
- hmac_internal(ctx, res, UpdateFn, FinalizeFn, ses->hmac_key,
- 0x36, crp->crp_buf, crd->crd_skip, crd->crd_len,
- crp->crp_flags);
+ ses->hash_init(&sctx);
+ for (i = 0; i < keylen; i++)
+ hmac_key[i] = key[i] ^ HMAC_IPAD_VAL;
+ for (i = keylen; i < sizeof(hmac_key); i++)
+ hmac_key[i] = 0 ^ HMAC_IPAD_VAL;
+ ses->hash_update(&sctx, hmac_key, sizeof(hmac_key));
+
+ crypto_apply(crp, crp->crp_aad_start, crp->crp_aad_length,
+ __DECONST(int (*)(void *, void *, u_int), ses->hash_update),
+ &sctx);
+ crypto_apply(crp, crp->crp_payload_start,
+ crp->crp_payload_length,
+ __DECONST(int (*)(void *, void *, u_int), ses->hash_update),
+ &sctx);
+ ses->hash_finalize(res, &sctx);
+
/* Outer hash: (K ^ OPAD) || inner hash */
- InitFn(ctx);
- hmac_internal(ctx, res, UpdateFn, FinalizeFn, ses->hmac_key,
- 0x5C, res, 0, hashlen, 0);
+ ses->hash_init(&sctx);
+ for (i = 0; i < keylen; i++)
+ hmac_key[i] = key[i] ^ HMAC_OPAD_VAL;
+ for (i = keylen; i < sizeof(hmac_key); i++)
+ hmac_key[i] = 0 ^ HMAC_OPAD_VAL;
+ ses->hash_update(&sctx, hmac_key, sizeof(hmac_key));
+ ses->hash_update(&sctx, res, ses->hash_len);
+ ses->hash_finalize(res, &sctx);
} else {
- InitFn(ctx);
- crypto_apply(crp->crp_flags, crp->crp_buf, crd->crd_skip,
- crd->crd_len, __DECONST(int (*)(void *, void *, u_int),
- UpdateFn), ctx);
- FinalizeFn(res, ctx);
- }
+ ses->hash_init(&sctx);
+
+ crypto_apply(crp, crp->crp_aad_start, crp->crp_aad_length,
+ __DECONST(int (*)(void *, void *, u_int), ses->hash_update),
+ &sctx);
+ crypto_apply(crp, crp->crp_payload_start,
+ crp->crp_payload_length,
+ __DECONST(int (*)(void *, void *, u_int), ses->hash_update),
+ &sctx);
- if (ses->mlen != 0 && ses->mlen < hashlen)
- hashlen = ses->mlen;
+ ses->hash_finalize(res, &sctx);
+ }
- crypto_copyback(crp->crp_flags, crp->crp_buf, crd->crd_inject, hashlen,
- (void *)res);
+ if (crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) {
+ crypto_copydata(crp, crp->crp_digest_start, ses->mlen, res2);
+ if (timingsafe_bcmp(res, res2, ses->mlen) != 0)
+ return (EBADMSG);
+ } else
+ crypto_copyback(crp, crp->crp_digest_start, ses->mlen, res);
return (0);
}
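The rewritten aesni_cipher_mac() above replaces the old hmac_internal() helper with per-session hash callbacks and inline ipad/opad handling. Below is a minimal user-space sketch of that dispatch shape, using a toy checksum in place of the real SHA implementations; the struct and helper names here are illustrative stand-ins, not the driver's actual symbols.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define TOY_BLOCK_LEN 64        /* stands in for SHA1_BLOCK_LEN */
#define HMAC_IPAD_VAL 0x36
#define HMAC_OPAD_VAL 0x5c

/* Toy 32-bit checksum standing in for intel_sha1_update() and
 * friends; only the calling convention matters for this sketch. */
struct toy_ctx { uint32_t sum; };

static void toy_init(void *v) { ((struct toy_ctx *)v)->sum = 0; }

static void
toy_update(void *v, const void *d, unsigned n)
{
	struct toy_ctx *c = v;
	const uint8_t *p = d;

	while (n--)
		c->sum = c->sum * 31 + *p++;
}

static void
toy_finalize(void *digest, void *v)
{
	memcpy(digest, &((struct toy_ctx *)v)->sum, sizeof(uint32_t));
}

/* Mirrors the new aesni_session fields: callbacks picked once at
 * session setup instead of a per-request algorithm switch. */
struct toy_session {
	void (*hash_init)(void *);
	void (*hash_update)(void *, const void *, unsigned);
	void (*hash_finalize)(void *, void *);
	int hash_len;
	int hmac;
};

static void
toy_mac(const struct toy_session *ses, const uint8_t *key, size_t klen,
    const void *data, size_t dlen, void *out)
{
	struct toy_ctx ctx;
	uint8_t pad[TOY_BLOCK_LEN];
	uint32_t inner;
	size_t i;

	if (!ses->hmac) {
		ses->hash_init(&ctx);
		ses->hash_update(&ctx, data, dlen);
		ses->hash_finalize(out, &ctx);
		return;
	}
	/* Inner hash: (K ^ IPAD) || data */
	ses->hash_init(&ctx);
	for (i = 0; i < sizeof(pad); i++)
		pad[i] = (i < klen ? key[i] : 0) ^ HMAC_IPAD_VAL;
	ses->hash_update(&ctx, pad, sizeof(pad));
	ses->hash_update(&ctx, data, dlen);
	ses->hash_finalize(&inner, &ctx);

	/* Outer hash: (K ^ OPAD) || inner hash */
	ses->hash_init(&ctx);
	for (i = 0; i < sizeof(pad); i++)
		pad[i] = (i < klen ? key[i] : 0) ^ HMAC_OPAD_VAL;
	ses->hash_update(&ctx, pad, sizeof(pad));
	ses->hash_update(&ctx, &inner, ses->hash_len);
	ses->hash_finalize(out, &ctx);
}
```

The real driver stores these pointers in struct aesni_session, which is what lets the new aesni_cipher_mac() drop the switch on ses->auth_algo that the deleted code needed on every request.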
diff --git a/sys/crypto/aesni/aesni.h b/sys/crypto/aesni/aesni.h
index eeb5b4361879..949ae2b7ddba 100644
--- a/sys/crypto/aesni/aesni.h
+++ b/sys/crypto/aesni/aesni.h
@@ -56,16 +56,16 @@ struct aesni_session {
uint8_t enc_schedule[AES_SCHED_LEN] __aligned(16);
uint8_t dec_schedule[AES_SCHED_LEN] __aligned(16);
uint8_t xts_schedule[AES_SCHED_LEN] __aligned(16);
- /* Same as the SHA256 Blocksize. */
- uint8_t hmac_key[SHA1_BLOCK_LEN] __aligned(16);
- int algo;
int rounds;
/* uint8_t *ses_ictx; */
/* uint8_t *ses_octx; */
- /* int ses_mlen; */
int used;
- int auth_algo;
int mlen;
+ int hash_len;
+ void (*hash_init)(void *);
+ void (*hash_update)(void *, const void *, unsigned);
+ void (*hash_finalize)(void *, void *);
+ bool hmac;
};
/*
@@ -120,7 +120,7 @@ int AES_CCM_decrypt(const unsigned char *in, unsigned char *out,
const unsigned char *addt, const unsigned char *ivec,
const unsigned char *tag, uint32_t nbytes, uint32_t abytes, int ibytes,
const unsigned char *key, int nr);
-int aesni_cipher_setup_common(struct aesni_session *ses, const uint8_t *key,
- int keylen);
+void aesni_cipher_setup_common(struct aesni_session *ses,
+ const struct crypto_session_params *csp, const uint8_t *key, int keylen);
#endif /* _AESNI_H_ */
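With the header now carrying only per-session state, algorithm and key-length validation happens up front against the flat session parameters. The following user-space model mirrors the checks in aesni_cipherprepare() and the csp_auth_mlen defaulting in aesni_cipher_setup(); the struct and function names are local mocks, not the kernel definitions.

```c
#include <assert.h>

/* Simplified stand-in for struct crypto_session_params; note that
 * csp_*_klen counts bytes, unlike the old bit-counted cri_klen. */
struct mock_csp {
	int csp_cipher_is_xts;
	int csp_cipher_klen;	/* bytes */
	int csp_auth_mlen;	/* 0 selects the full digest length */
};

/* Key-length check in the style of aesni_cipherprepare():
 * CBC/ICM/GCM/CCM take 128/192/256-bit keys, while XTS takes a
 * doubled 256- or 512-bit key. */
static int
mock_cipherprepare(const struct mock_csp *csp)
{
	int bits = csp->csp_cipher_klen * 8;

	if (csp->csp_cipher_is_xts)
		return (bits == 256 || bits == 512 ? 0 : 22 /* EINVAL */);
	return (bits == 128 || bits == 192 || bits == 256 ? 0 : 22);
}

/* Digest-length defaulting from aesni_cipher_setup(): an
 * unspecified csp_auth_mlen means "use the hash's full output". */
static int
mock_mlen(const struct mock_csp *csp, int hash_len)
{
	return (csp->csp_auth_mlen == 0 ? hash_len : csp->csp_auth_mlen);
}
```

Because these checks run at session creation, per-request paths such as aesni_cipher_crypt() can assume the key lengths are already sane.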
diff --git a/sys/crypto/aesni/aesni_wrap.c b/sys/crypto/aesni/aesni_wrap.c
index a8a8ae749c77..95f7e191d00d 100644
--- a/sys/crypto/aesni/aesni_wrap.c
+++ b/sys/crypto/aesni/aesni_wrap.c
@@ -435,51 +435,37 @@ aesni_decrypt_xts(int rounds, const void *data_schedule,
iv, 0);
}
-int
-aesni_cipher_setup_common(struct aesni_session *ses, const uint8_t *key,
- int keylen)
+void
+aesni_cipher_setup_common(struct aesni_session *ses,
+ const struct crypto_session_params *csp, const uint8_t *key, int keylen)
{
int decsched;
decsched = 1;
- switch (ses->algo) {
+ switch (csp->csp_cipher_alg) {
case CRYPTO_AES_ICM:
case CRYPTO_AES_NIST_GCM_16:
case CRYPTO_AES_CCM_16:
decsched = 0;
- /* FALLTHROUGH */
- case CRYPTO_AES_CBC:
- switch (keylen) {
- case 128:
- ses->rounds = AES128_ROUNDS;
- break;
- case 192:
- ses->rounds = AES192_ROUNDS;
- break;
- case 256:
- ses->rounds = AES256_ROUNDS;
- break;
- default:
- CRYPTDEB("invalid CBC/ICM/GCM key length");
- return (EINVAL);
- }
break;
- case CRYPTO_AES_XTS:
- switch (keylen) {
- case 256:
- ses->rounds = AES128_ROUNDS;
- break;
- case 512:
- ses->rounds = AES256_ROUNDS;
- break;
- default:
- CRYPTDEB("invalid XTS key length");
- return (EINVAL);
- }
+ }
+
+ if (csp->csp_cipher_alg == CRYPTO_AES_XTS)
+ keylen /= 2;
+
+ switch (keylen * 8) {
+ case 128:
+ ses->rounds = AES128_ROUNDS;
+ break;
+ case 192:
+ ses->rounds = AES192_ROUNDS;
+ break;
+ case 256:
+ ses->rounds = AES256_ROUNDS;
break;
default:
- return (EINVAL);
+ panic("shouldn't happen");
}
aesni_set_enckey(key, ses->enc_schedule, ses->rounds);
@@ -487,9 +473,7 @@ aesni_cipher_setup_common(struct aesni_session *ses, const uint8_t *key,
aesni_set_deckey(ses->enc_schedule, ses->dec_schedule,
ses->rounds);
- if (ses->algo == CRYPTO_AES_XTS)
- aesni_set_enckey(key + keylen / 16, ses->xts_schedule,
+ if (csp->csp_cipher_alg == CRYPTO_AES_XTS)
+ aesni_set_enckey(key + keylen, ses->xts_schedule,
ses->rounds);
-
- return (0);
}
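The collapsed switch in aesni_cipher_setup_common() above hinges on halving the XTS key length before selecting a round count, since an XTS key is two AES keys stored back to back and the second half (key + keylen) seeds the tweak schedule. A small sketch of that selection, with round constants as in the driver but a hypothetical function name:

```c
#include <assert.h>

#define AES128_ROUNDS 10
#define AES192_ROUNDS 12
#define AES256_ROUNDS 14

/* Round-count selection in the shape of the new
 * aesni_cipher_setup_common(): halve an XTS key first, then
 * switch on the effective single-key size in bits. */
static int
aes_rounds(int is_xts, int klen_bytes)
{
	if (is_xts)
		klen_bytes /= 2;
	switch (klen_bytes * 8) {
	case 128:
		return (AES128_ROUNDS);
	case 192:
		return (AES192_ROUNDS);
	case 256:
		return (AES256_ROUNDS);
	default:
		return (-1);	/* already rejected during session setup */
	}
}
```

Since aesni_cipherprepare() rejects bad lengths before a session exists, the in-kernel default case can simply panic("shouldn't happen"); this sketch returns -1 instead so it can run standalone.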
diff --git a/sys/crypto/armv8/armv8_crypto.c b/sys/crypto/armv8/armv8_crypto.c
index 64f2240b0e43..caaecc254867 100644
--- a/sys/crypto/armv8/armv8_crypto.c
+++ b/sys/crypto/armv8/armv8_crypto.c
@@ -85,7 +85,7 @@ static struct fpu_kern_ctx **ctx_vfp;
} while (0)
static int armv8_crypto_cipher_process(struct armv8_crypto_session *,
- struct cryptodesc *, struct cryptop *);
+ struct cryptop *);
MALLOC_DEFINE(M_ARMV8_CRYPTO, "armv8_crypto", "ARMv8 Crypto Data");
@@ -131,7 +131,7 @@ armv8_crypto_attach(device_t dev)
sc->dieing = 0;
sc->cid = crypto_get_driverid(dev, sizeof(struct armv8_crypto_session),
- CRYPTOCAP_F_HARDWARE | CRYPTOCAP_F_SYNC);
+ CRYPTOCAP_F_SOFTWARE | CRYPTOCAP_F_SYNC);
if (sc->cid < 0) {
device_printf(dev, "Could not get crypto driver id.\n");
return (ENOMEM);
@@ -149,8 +149,6 @@ armv8_crypto_attach(device_t dev)
mtx_init(&ctx_mtx[i], "armv8cryptoctx", NULL, MTX_DEF|MTX_NEW);
}
- crypto_register(sc->cid, CRYPTO_AES_CBC, 0, 0);
-
return (0);
}
@@ -185,83 +183,74 @@ armv8_crypto_detach(device_t dev)
}
static int
-armv8_crypto_cipher_setup(struct armv8_crypto_session *ses,
- struct cryptoini *encini)
+armv8_crypto_probesession(device_t dev,
+ const struct crypto_session_params *csp)
{
- int i;
- switch (ses->algo) {
- case CRYPTO_AES_CBC:
- switch (encini->cri_klen) {
- case 128:
- ses->rounds = AES128_ROUNDS;
- break;
- case 192:
- ses->rounds = AES192_ROUNDS;
- break;
- case 256:
- ses->rounds = AES256_ROUNDS;
+ if (csp->csp_flags != 0)
+ return (EINVAL);
+ switch (csp->csp_mode) {
+ case CSP_MODE_CIPHER:
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_AES_CBC:
+ if (csp->csp_ivlen != AES_BLOCK_LEN)
+ return (EINVAL);
+ switch (csp->csp_cipher_klen * 8) {
+ case 128:
+ case 192:
+ case 256:
+ break;
+ default:
+ return (EINVAL);
+ }
break;
default:
- CRYPTDEB("invalid CBC/ICM/GCM key length");
return (EINVAL);
}
- break;
default:
return (EINVAL);
}
+ return (CRYPTODEV_PROBE_ACCEL_SOFTWARE);
+}
+
+static void
+armv8_crypto_cipher_setup(struct armv8_crypto_session *ses,
+ const struct crypto_session_params *csp)
+{
+ int i;
- rijndaelKeySetupEnc(ses->enc_schedule, encini->cri_key,
- encini->cri_klen);
- rijndaelKeySetupDec(ses->dec_schedule, encini->cri_key,
- encini->cri_klen);
+ switch (csp->csp_cipher_klen * 8) {
+ case 128:
+ ses->rounds = AES128_ROUNDS;
+ break;
+ case 192:
+ ses->rounds = AES192_ROUNDS;
+ break;
+ case 256:
+ ses->rounds = AES256_ROUNDS;
+ break;
+ default:
+ panic("invalid CBC key length");
+ }
+
+ rijndaelKeySetupEnc(ses->enc_schedule, csp->csp_cipher_key,
+ csp->csp_cipher_klen * 8);
+ rijndaelKeySetupDec(ses->dec_schedule, csp->csp_cipher_key,
+ csp->csp_cipher_klen * 8);
for (i = 0; i < nitems(ses->enc_schedule); i++) {
ses->enc_schedule[i] = bswap32(ses->enc_schedule[i]);
ses->dec_schedule[i] = bswap32(ses->dec_schedule[i]);
}
-
- return (0);
}
static int
armv8_crypto_newsession(device_t dev, crypto_session_t cses,
- struct cryptoini *cri)
+ const struct crypto_session_params *csp)
{
struct armv8_crypto_softc *sc;
struct armv8_crypto_session *ses;
- struct cryptoini *encini;
- int error;
-
- if (cri == NULL) {
- CRYPTDEB("no cri");
- return (EINVAL);
- }
sc = device_get_softc(dev);
- if (sc->dieing)
- return (EINVAL);
-
- ses = NULL;
- encini = NULL;
- for (; cri != NULL; cri = cri->cri_next) {
- switch (cri->cri_alg) {
- case CRYPTO_AES_CBC:
- if (encini != NULL) {
- CRYPTDEB("encini already set");
- return (EINVAL);
- }
- encini = cri;
- break;
- default:
- CRYPTDEB("unhandled algorithm");
- return (EINVAL);
- }
- }
- if (encini == NULL) {
- CRYPTDEB("no cipher");
- return (EINVAL);
- }
-
rw_wlock(&sc->lock);
if (sc->dieing) {
rw_wunlock(&sc->lock);
@@ -269,15 +258,7 @@ armv8_crypto_newsession(device_t dev, crypto_session_t cses,
}
ses = crypto_get_driver_session(cses);
- ses->algo = encini->cri_alg;
-
- error = armv8_crypto_cipher_setup(ses, encini);
- if (error != 0) {
- CRYPTDEB("setup failed");
- rw_wunlock(&sc->lock);
- return (error);
- }
-
+ armv8_crypto_cipher_setup(ses, csp);
rw_wunlock(&sc->lock);
return (0);
}
@@ -285,50 +266,17 @@ armv8_crypto_newsession(device_t dev, crypto_session_t cses,
static int
armv8_crypto_process(device_t dev, struct cryptop *crp, int hint __unused)
{
- struct cryptodesc *crd, *enccrd;
struct armv8_crypto_session *ses;
int error;
- error = 0;
- enccrd = NULL;
-
- /* Sanity check. */
- if (crp == NULL)
- return (EINVAL);
-
- if (crp->crp_callback == NULL || crp->crp_desc == NULL) {
- error = EINVAL;
- goto out;
- }
-
- for (crd = crp->crp_desc; crd != NULL; crd = crd->crd_next) {
- switch (crd->crd_alg) {
- case CRYPTO_AES_CBC:
- if (enccrd != NULL) {
- error = EINVAL;
- goto out;
- }
- enccrd = crd;
- break;
- default:
- error = EINVAL;
- goto out;
- }
- }
-
- if (enccrd == NULL) {
- error = EINVAL;
- goto out;
- }
-
/* We can only handle full blocks for now */
- if ((enccrd->crd_len % AES_BLOCK_LEN) != 0) {
+ if ((crp->crp_payload_length % AES_BLOCK_LEN) != 0) {
error = EINVAL;
goto out;
}
ses = crypto_get_driver_session(crp->crp_session);
- error = armv8_crypto_cipher_process(ses, enccrd, crp);
+ error = armv8_crypto_cipher_process(ses, crp);
out:
crp->crp_etype = error;
@@ -337,37 +285,21 @@ out:
}
static uint8_t *
-armv8_crypto_cipher_alloc(struct cryptodesc *enccrd, struct cryptop *crp,
- int *allocated)
+armv8_crypto_cipher_alloc(struct cryptop *crp, int *allocated)
{
- struct mbuf *m;
- struct uio *uio;
- struct iovec *iov;
uint8_t *addr;
- if (crp->crp_flags & CRYPTO_F_IMBUF) {
- m = (struct mbuf *)crp->crp_buf;
- if (m->m_next != NULL)
- goto alloc;
- addr = mtod(m, uint8_t *);
- } else if (crp->crp_flags & CRYPTO_F_IOV) {
- uio = (struct uio *)crp->crp_buf;
- if (uio->uio_iovcnt != 1)
- goto alloc;
- iov = uio->uio_iov;
- addr = (uint8_t *)iov->iov_base;
- } else
- addr = (uint8_t *)crp->crp_buf;
- *allocated = 0;
- addr += enccrd->crd_skip;
- return (addr);
-
-alloc:
- addr = malloc(enccrd->crd_len, M_ARMV8_CRYPTO, M_NOWAIT);
+ addr = crypto_contiguous_subsegment(crp, crp->crp_payload_start,
+ crp->crp_payload_length);
+ if (addr != NULL) {
+ *allocated = 0;
+ return (addr);
+ }
+ addr = malloc(crp->crp_payload_length, M_ARMV8_CRYPTO, M_NOWAIT);
if (addr != NULL) {
*allocated = 1;
- crypto_copydata(crp->crp_flags, crp->crp_buf, enccrd->crd_skip,
- enccrd->crd_len, addr);
+ crypto_copydata(crp, crp->crp_payload_start,
+ crp->crp_payload_length, addr);
} else
*allocated = 0;
return (addr);
@@ -375,18 +307,20 @@ alloc:
static int
armv8_crypto_cipher_process(struct armv8_crypto_session *ses,
- struct cryptodesc *enccrd, struct cryptop *crp)
+ struct cryptop *crp)
{
+ const struct crypto_session_params *csp;
struct fpu_kern_ctx *ctx;
uint8_t *buf;
uint8_t iv[AES_BLOCK_LEN];
int allocated, i;
- int encflag, ivlen;
+ int encflag;
int kt;
- encflag = (enccrd->crd_flags & CRD_F_ENCRYPT) == CRD_F_ENCRYPT;
+ csp = crypto_get_params(crp->crp_session);
+ encflag = CRYPTO_OP_IS_ENCRYPT(crp->crp_op);
- buf = armv8_crypto_cipher_alloc(enccrd, crp, &allocated);
+ buf = armv8_crypto_cipher_alloc(crp, &allocated);
if (buf == NULL)
return (ENOMEM);
@@ -397,56 +331,41 @@ armv8_crypto_cipher_process(struct armv8_crypto_session *ses,
FPU_KERN_NORMAL | FPU_KERN_KTHR);
}
- if ((enccrd->crd_flags & CRD_F_KEY_EXPLICIT) != 0) {
- panic("CRD_F_KEY_EXPLICIT");
- }
-
- switch (enccrd->crd_alg) {
- case CRYPTO_AES_CBC:
- ivlen = AES_BLOCK_LEN;
- break;
+ if (crp->crp_cipher_key != NULL) {
+ panic("armv8: new cipher key");
}
/* Setup iv */
- if (encflag) {
- if ((enccrd->crd_flags & CRD_F_IV_EXPLICIT) != 0)
- bcopy(enccrd->crd_iv, iv, ivlen);
- else
- arc4rand(iv, ivlen, 0);
-
- if ((enccrd->crd_flags & CRD_F_IV_PRESENT) == 0)
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- enccrd->crd_inject, ivlen, iv);
- } else {
- if ((enccrd->crd_flags & CRD_F_IV_EXPLICIT) != 0)
- bcopy(enccrd->crd_iv, iv, ivlen);
- else
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- enccrd->crd_inject, ivlen, iv);
- }
+ if (crp->crp_flags & CRYPTO_F_IV_GENERATE) {
+ arc4rand(iv, csp->csp_ivlen, 0);
+ crypto_copyback(crp, crp->crp_iv_start, csp->csp_ivlen, iv);
+ } else if (crp->crp_flags & CRYPTO_F_IV_SEPARATE)
+ memcpy(iv, crp->crp_iv, csp->csp_ivlen);
+ else
+ crypto_copydata(crp, crp->crp_iv_start, csp->csp_ivlen, iv);
/* Do work */
- switch (ses->algo) {
+ switch (csp->csp_cipher_alg) {
case CRYPTO_AES_CBC:
if (encflag)
armv8_aes_encrypt_cbc(ses->rounds, ses->enc_schedule,
- enccrd->crd_len, buf, buf, iv);
+ crp->crp_payload_length, buf, buf, iv);
else
armv8_aes_decrypt_cbc(ses->rounds, ses->dec_schedule,
- enccrd->crd_len, buf, iv);
+ crp->crp_payload_length, buf, iv);
break;
}
if (allocated)
- crypto_copyback(crp->crp_flags, crp->crp_buf, enccrd->crd_skip,
- enccrd->crd_len, buf);
+ crypto_copyback(crp, crp->crp_payload_start,
+ crp->crp_payload_length, buf);
if (!kt) {
fpu_kern_leave(curthread, ctx);
RELEASE_CTX(i, ctx);
}
if (allocated) {
- bzero(buf, enccrd->crd_len);
+ bzero(buf, crp->crp_payload_length);
free(buf, M_ARMV8_CRYPTO);
}
return (0);
@@ -458,6 +377,7 @@ static device_method_t armv8_crypto_methods[] = {
DEVMETHOD(device_attach, armv8_crypto_attach),
DEVMETHOD(device_detach, armv8_crypto_detach),
+ DEVMETHOD(cryptodev_probesession, armv8_crypto_probesession),
DEVMETHOD(cryptodev_newsession, armv8_crypto_newsession),
DEVMETHOD(cryptodev_process, armv8_crypto_process),
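armv8_crypto now advertises its capabilities through the new cryptodev_probesession method rather than crypto_register(). As the commit message notes, successful probes return a negative value compared much like device_probe results, so the framework can rank hardware above accelerated software above plain software. The constants and selection loop below are an illustrative model only; the values are assumptions, not the kernel's actual CRYPTODEV_PROBE_* definitions.

```c
#include <assert.h>

/* Illustrative probe priorities: hardware beats accelerated
 * software beats plain software (assumed values). */
#define PROBE_HARDWARE		(-10)
#define PROBE_ACCEL_SOFTWARE	(-25)
#define PROBE_SOFTWARE		(-50)

/* Returns the index of the winning driver: negative results are
 * successful probes, positive ones are errnos from a declining
 * driver, and the greatest (closest to zero) negative value wins. */
static int
best_probe(const int *results, int n)
{
	int best = -1, i;

	for (i = 0; i < n; i++) {
		if (results[i] > 0)	/* e.g. EINVAL: driver declined */
			continue;
		if (best == -1 || results[i] > results[best])
			best = i;
	}
	return (best);
}
```

This ranking is also why a session restricted to hardware no longer falls back to accelerated software such as aesni: those drivers now probe at a distinct, lower-priority level that the hardware-only request filters out.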
diff --git a/sys/crypto/blake2/blake2_cryptodev.c b/sys/crypto/blake2/blake2_cryptodev.c
index d5150d04f76f..262823b5a758 100644
--- a/sys/crypto/blake2/blake2_cryptodev.c
+++ b/sys/crypto/blake2/blake2_cryptodev.c
@@ -50,10 +50,7 @@ __FBSDID("$FreeBSD$");
#endif
struct blake2_session {
- int algo;
- size_t klen;
size_t mlen;
- uint8_t key[BLAKE2B_KEYBYTES];
};
CTASSERT((size_t)BLAKE2B_KEYBYTES > (size_t)BLAKE2S_KEYBYTES);
@@ -79,10 +76,8 @@ static struct fpu_kern_ctx **ctx_fpu;
(ctx) = NULL; \
} while (0)
-static int blake2_newsession(device_t, crypto_session_t cses,
- struct cryptoini *cri);
static int blake2_cipher_setup(struct blake2_session *ses,
- struct cryptoini *authini);
+ const struct crypto_session_params *csp);
static int blake2_cipher_process(struct blake2_session *ses,
struct cryptop *crp);
@@ -134,7 +129,7 @@ blake2_attach(device_t dev)
sc->dying = false;
sc->cid = crypto_get_driverid(dev, sizeof(struct blake2_session),
- CRYPTOCAP_F_HARDWARE | CRYPTOCAP_F_SYNC);
+ CRYPTOCAP_F_SOFTWARE | CRYPTOCAP_F_SYNC);
if (sc->cid < 0) {
device_printf(dev, "Could not get crypto driver id.\n");
return (ENOMEM);
@@ -152,8 +147,6 @@ blake2_attach(device_t dev)
rw_init(&sc->lock, "blake2_lock");
- crypto_register(sc->cid, CRYPTO_BLAKE2B, 0, 0);
- crypto_register(sc->cid, CRYPTO_BLAKE2S, 0, 0);
return (0);
}
@@ -177,52 +170,47 @@ blake2_detach(device_t dev)
}
static int
-blake2_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
+blake2_probesession(device_t dev, const struct crypto_session_params *csp)
{
- struct blake2_softc *sc;
- struct blake2_session *ses;
- struct cryptoini *authini;
- int error;
- if (cri == NULL) {
- CRYPTDEB("no cri");
+ if (csp->csp_flags != 0)
return (EINVAL);
- }
-
- sc = device_get_softc(dev);
-
- authini = NULL;
- for (; cri != NULL; cri = cri->cri_next) {
- switch (cri->cri_alg) {
+ switch (csp->csp_mode) {
+ case CSP_MODE_DIGEST:
+ switch (csp->csp_auth_alg) {
case CRYPTO_BLAKE2B:
case CRYPTO_BLAKE2S:
- if (authini != NULL) {
- CRYPTDEB("authini already set");
- return (EINVAL);
- }
- authini = cri;
break;
default:
- CRYPTDEB("unhandled algorithm");
return (EINVAL);
}
- }
- if (authini == NULL) {
- CRYPTDEB("no cipher");
+ break;
+ default:
return (EINVAL);
}
+ return (CRYPTODEV_PROBE_ACCEL_SOFTWARE);
+}
- rw_wlock(&sc->lock);
+static int
+blake2_newsession(device_t dev, crypto_session_t cses,
+ const struct crypto_session_params *csp)
+{
+ struct blake2_softc *sc;
+ struct blake2_session *ses;
+ int error;
+
+ sc = device_get_softc(dev);
+
+ ses = crypto_get_driver_session(cses);
+
+ rw_rlock(&sc->lock);
if (sc->dying) {
- rw_wunlock(&sc->lock);
+ rw_runlock(&sc->lock);
return (EINVAL);
}
- rw_wunlock(&sc->lock);
-
- ses = crypto_get_driver_session(cses);
+ rw_runlock(&sc->lock);
- ses->algo = authini->cri_alg;
- error = blake2_cipher_setup(ses, authini);
+ error = blake2_cipher_setup(ses, csp);
if (error != 0) {
CRYPTDEB("setup failed");
return (error);
@@ -235,48 +223,14 @@ static int
blake2_process(device_t dev, struct cryptop *crp, int hint __unused)
{
struct blake2_session *ses;
- struct cryptodesc *crd, *authcrd;
int error;
- ses = NULL;
- error = 0;
- authcrd = NULL;
-
- /* Sanity check. */
- if (crp == NULL)
- return (EINVAL);
-
- if (crp->crp_callback == NULL || crp->crp_desc == NULL) {
- error = EINVAL;
- goto out;
- }
-
- for (crd = crp->crp_desc; crd != NULL; crd = crd->crd_next) {
- switch (crd->crd_alg) {
- case CRYPTO_BLAKE2B:
- case CRYPTO_BLAKE2S:
- if (authcrd != NULL) {
- error = EINVAL;
- goto out;
- }
- authcrd = crd;
- break;
-
- default:
- error = EINVAL;
- goto out;
- }
- }
-
ses = crypto_get_driver_session(crp->crp_session);
error = blake2_cipher_process(ses, crp);
- if (error != 0)
- goto out;
-out:
crp->crp_etype = error;
crypto_done(crp);
- return (error);
+ return (0);
}
static device_method_t blake2_methods[] = {
@@ -285,6 +239,7 @@ static device_method_t blake2_methods[] = {
DEVMETHOD(device_attach, blake2_attach),
DEVMETHOD(device_detach, blake2_detach),
+ DEVMETHOD(cryptodev_probesession, blake2_probesession),
DEVMETHOD(cryptodev_newsession, blake2_newsession),
DEVMETHOD(cryptodev_process, blake2_process),
@@ -302,37 +257,48 @@ DRIVER_MODULE(blake2, nexus, blake2_driver, blake2_devclass, 0, 0);
MODULE_VERSION(blake2, 1);
MODULE_DEPEND(blake2, crypto, 1, 1, 1);
+static bool
+blake2_check_klen(const struct crypto_session_params *csp, unsigned klen)
+{
+
+ if (csp->csp_auth_alg == CRYPTO_BLAKE2S)
+ return (klen <= BLAKE2S_KEYBYTES);
+ else
+ return (klen <= BLAKE2B_KEYBYTES);
+}
+
static int
-blake2_cipher_setup(struct blake2_session *ses, struct cryptoini *authini)
+blake2_cipher_setup(struct blake2_session *ses,
+ const struct crypto_session_params *csp)
{
- int keylen;
+ int hashlen;
CTASSERT((size_t)BLAKE2S_OUTBYTES <= (size_t)BLAKE2B_OUTBYTES);
- if (authini->cri_mlen < 0)
+ if (!blake2_check_klen(csp, csp->csp_auth_klen))
+ return (EINVAL);
+
+ if (csp->csp_auth_mlen < 0)
return (EINVAL);
- switch (ses->algo) {
+ switch (csp->csp_auth_alg) {
case CRYPTO_BLAKE2S:
- if (authini->cri_mlen != 0 &&
- authini->cri_mlen > BLAKE2S_OUTBYTES)
- return (EINVAL);
- /* FALLTHROUGH */
+ hashlen = BLAKE2S_OUTBYTES;
+ break;
case CRYPTO_BLAKE2B:
- if (authini->cri_mlen != 0 &&
- authini->cri_mlen > BLAKE2B_OUTBYTES)
- return (EINVAL);
-
- if (authini->cri_klen % 8 != 0)
- return (EINVAL);
- keylen = authini->cri_klen / 8;
- if (keylen > sizeof(ses->key) ||
- (ses->algo == CRYPTO_BLAKE2S && keylen > BLAKE2S_KEYBYTES))
- return (EINVAL);
- ses->klen = keylen;
- memcpy(ses->key, authini->cri_key, keylen);
- ses->mlen = authini->cri_mlen;
+ hashlen = BLAKE2B_OUTBYTES;
+ break;
+ default:
+ return (EINVAL);
}
+
+ if (csp->csp_auth_mlen > hashlen)
+ return (EINVAL);
+
+ if (csp->csp_auth_mlen == 0)
+ ses->mlen = hashlen;
+ else
+ ses->mlen = csp->csp_auth_mlen;
return (0);
}
@@ -365,15 +331,15 @@ blake2_cipher_process(struct blake2_session *ses, struct cryptop *crp)
blake2b_state sb;
blake2s_state ss;
} bctx;
- char res[BLAKE2B_OUTBYTES];
+ char res[BLAKE2B_OUTBYTES], res2[BLAKE2B_OUTBYTES];
+ const struct crypto_session_params *csp;
struct fpu_kern_ctx *ctx;
+ const void *key;
int ctxidx;
bool kt;
- struct cryptodesc *crd;
int error, rc;
- size_t hashlen;
+ unsigned klen;
- crd = crp->crp_desc;
ctx = NULL;
ctxidx = 0;
error = EINVAL;
@@ -385,47 +351,42 @@ blake2_cipher_process(struct blake2_session *ses, struct cryptop *crp)
FPU_KERN_NORMAL | FPU_KERN_KTHR);
}
- if (crd->crd_flags != 0)
- goto out;
-
- switch (ses->algo) {
+ csp = crypto_get_params(crp->crp_session);
+ if (crp->crp_auth_key != NULL)
+ key = crp->crp_auth_key;
+ else
+ key = csp->csp_auth_key;
+ klen = csp->csp_auth_klen;
+ switch (csp->csp_auth_alg) {
case CRYPTO_BLAKE2B:
- if (ses->mlen != 0)
- hashlen = ses->mlen;
+ if (klen > 0)
+ rc = blake2b_init_key(&bctx.sb, ses->mlen, key, klen);
else
- hashlen = BLAKE2B_OUTBYTES;
- if (ses->klen > 0)
- rc = blake2b_init_key(&bctx.sb, hashlen, ses->key, ses->klen);
- else
- rc = blake2b_init(&bctx.sb, hashlen);
+ rc = blake2b_init(&bctx.sb, ses->mlen);
if (rc != 0)
goto out;
- error = crypto_apply(crp->crp_flags, crp->crp_buf, crd->crd_skip,
- crd->crd_len, blake2b_applicator, &bctx.sb);
+ error = crypto_apply(crp, crp->crp_payload_start,
+ crp->crp_payload_length, blake2b_applicator, &bctx.sb);
if (error != 0)
goto out;
- rc = blake2b_final(&bctx.sb, res, hashlen);
+ rc = blake2b_final(&bctx.sb, res, ses->mlen);
if (rc != 0) {
error = EINVAL;
goto out;
}
break;
case CRYPTO_BLAKE2S:
- if (ses->mlen != 0)
- hashlen = ses->mlen;
- else
- hashlen = BLAKE2S_OUTBYTES;
- if (ses->klen > 0)
- rc = blake2s_init_key(&bctx.ss, hashlen, ses->key, ses->klen);
+ if (klen > 0)
+ rc = blake2s_init_key(&bctx.ss, ses->mlen, key, klen);
else
- rc = blake2s_init(&bctx.ss, hashlen);
+ rc = blake2s_init(&bctx.ss, ses->mlen);
if (rc != 0)
goto out;
- error = crypto_apply(crp->crp_flags, crp->crp_buf, crd->crd_skip,
- crd->crd_len, blake2s_applicator, &bctx.ss);
+ error = crypto_apply(crp, crp->crp_payload_start,
+ crp->crp_payload_length, blake2s_applicator, &bctx.ss);
if (error != 0)
goto out;
- rc = blake2s_final(&bctx.ss, res, hashlen);
+ rc = blake2s_final(&bctx.ss, res, ses->mlen);
if (rc != 0) {
error = EINVAL;
goto out;
@@ -435,8 +396,12 @@ blake2_cipher_process(struct blake2_session *ses, struct cryptop *crp)
panic("unreachable");
}
- crypto_copyback(crp->crp_flags, crp->crp_buf, crd->crd_inject, hashlen,
- (void *)res);
+ if (crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) {
+ crypto_copydata(crp, crp->crp_digest_start, ses->mlen, res2);
+ if (timingsafe_bcmp(res, res2, ses->mlen) != 0)
+ return (EBADMSG);
+ } else
+ crypto_copyback(crp, crp->crp_digest_start, ses->mlen, res);
out:
if (!kt) {
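The blake2 changes above replace unconditional `crypto_copyback()` of the digest with a branch on `CRYPTO_OP_VERIFY_DIGEST`: verify requests recompute the digest and compare it against the one in the buffer with `timingsafe_bcmp(9)`, returning `EBADMSG` on mismatch. The sketch below models that pattern with a toy hash so it is self-contained; every `toy_`-prefixed name is hypothetical, not OCF API:

```c
/*
 * Sketch of the CRYPTO_OP_VERIFY_DIGEST pattern, using a toy FNV-1a
 * "digest" purely for illustration.  Not kernel code.
 */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define	TOY_DIGEST_LEN	4

static void
toy_digest(const uint8_t *buf, size_t len, uint8_t out[TOY_DIGEST_LEN])
{
	uint32_t h = 2166136261u;	/* FNV-1a offset basis */
	size_t i;

	for (i = 0; i < len; i++)
		h = (h ^ buf[i]) * 16777619u;
	memcpy(out, &h, TOY_DIGEST_LEN);
}

/*
 * Verify mode: recompute and compare in constant time (the driver
 * uses timingsafe_bcmp(9) and returns EBADMSG on mismatch); generate
 * mode would instead copy the fresh digest back into the buffer.
 */
static int
toy_verify(const uint8_t *buf, size_t len, const uint8_t *expected)
{
	uint8_t res[TOY_DIGEST_LEN];
	uint8_t diff = 0;
	size_t i;

	toy_digest(buf, len, res);
	for (i = 0; i < TOY_DIGEST_LEN; i++)	/* constant-time compare */
		diff |= res[i] ^ expected[i];
	return (diff == 0 ? 0 : -1);
}
```

The constant-time comparison matters because an early-exit `memcmp` would leak how many leading digest bytes matched.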
diff --git a/sys/crypto/ccp/ccp.c b/sys/crypto/ccp/ccp.c
index 27b623b0697c..b38315e35ba8 100644
--- a/sys/crypto/ccp/ccp.c
+++ b/sys/crypto/ccp/ccp.c
@@ -96,22 +96,28 @@ ccp_populate_sglist(struct sglist *sg, struct cryptop *crp)
int error;
sglist_reset(sg);
- if (crp->crp_flags & CRYPTO_F_IMBUF)
+ switch (crp->crp_buf_type) {
+ case CRYPTO_BUF_MBUF:
error = sglist_append_mbuf(sg, crp->crp_mbuf);
- else if (crp->crp_flags & CRYPTO_F_IOV)
+ break;
+ case CRYPTO_BUF_UIO:
error = sglist_append_uio(sg, crp->crp_uio);
- else
+ break;
+ case CRYPTO_BUF_CONTIG:
error = sglist_append(sg, crp->crp_buf, crp->crp_ilen);
+ break;
+ default:
+ error = EINVAL;
+ }
return (error);
}
/*
* Handle a GCM request with an empty payload by performing the
- * operation in software. Derived from swcr_authenc().
+ * operation in software.
*/
static void
-ccp_gcm_soft(struct ccp_session *s, struct cryptop *crp,
- struct cryptodesc *crda, struct cryptodesc *crde)
+ccp_gcm_soft(struct ccp_session *s, struct cryptop *crp)
{
struct aes_gmac_ctx gmac_ctx;
char block[GMAC_BLOCK_LEN];
@@ -123,21 +129,11 @@ ccp_gcm_soft(struct ccp_session *s, struct cryptop *crp,
* This assumes a 12-byte IV from the crp. See longer comment
* above in ccp_gcm() for more details.
*/
- if (crde->crd_flags & CRD_F_ENCRYPT) {
- if (crde->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(iv, crde->crd_iv, 12);
- else
- arc4rand(iv, 12, 0);
- if ((crde->crd_flags & CRD_F_IV_PRESENT) == 0)
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- crde->crd_inject, 12, iv);
- } else {
- if (crde->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(iv, crde->crd_iv, 12);
- else
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crde->crd_inject, 12, iv);
+ if ((crp->crp_flags & CRYPTO_F_IV_SEPARATE) == 0) {
+ crp->crp_etype = EINVAL;
+ goto out;
}
+ memcpy(iv, crp->crp_iv, 12);
*(uint32_t *)&iv[12] = htobe32(1);
/* Initialize the MAC. */
@@ -146,34 +142,34 @@ ccp_gcm_soft(struct ccp_session *s, struct cryptop *crp,
AES_GMAC_Reinit(&gmac_ctx, iv, sizeof(iv));
/* MAC the AAD. */
- for (i = 0; i < crda->crd_len; i += sizeof(block)) {
- len = imin(crda->crd_len - i, sizeof(block));
- crypto_copydata(crp->crp_flags, crp->crp_buf, crda->crd_skip +
- i, len, block);
+ for (i = 0; i < crp->crp_aad_length; i += sizeof(block)) {
+ len = imin(crp->crp_aad_length - i, sizeof(block));
+ crypto_copydata(crp, crp->crp_aad_start + i, len, block);
bzero(block + len, sizeof(block) - len);
AES_GMAC_Update(&gmac_ctx, block, sizeof(block));
}
/* Length block. */
bzero(block, sizeof(block));
- ((uint32_t *)block)[1] = htobe32(crda->crd_len * 8);
+ ((uint32_t *)block)[1] = htobe32(crp->crp_aad_length * 8);
AES_GMAC_Update(&gmac_ctx, block, sizeof(block));
AES_GMAC_Final(digest, &gmac_ctx);
- if (crde->crd_flags & CRD_F_ENCRYPT) {
- crypto_copyback(crp->crp_flags, crp->crp_buf, crda->crd_inject,
- sizeof(digest), digest);
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) {
+ crypto_copyback(crp, crp->crp_digest_start, sizeof(digest),
+ digest);
crp->crp_etype = 0;
} else {
char digest2[GMAC_DIGEST_LEN];
- crypto_copydata(crp->crp_flags, crp->crp_buf, crda->crd_inject,
- sizeof(digest2), digest2);
+ crypto_copydata(crp, crp->crp_digest_start, sizeof(digest2),
+ digest2);
if (timingsafe_bcmp(digest, digest2, sizeof(digest)) == 0)
crp->crp_etype = 0;
else
crp->crp_etype = EBADMSG;
}
+out:
crypto_done(crp);
}
@@ -259,22 +255,6 @@ ccp_attach(device_t dev)
random_source_register(&random_ccp);
}
- if ((sc->hw_features & VERSION_CAP_AES) != 0) {
- crypto_register(sc->cid, CRYPTO_AES_CBC, 0, 0);
- crypto_register(sc->cid, CRYPTO_AES_ICM, 0, 0);
- crypto_register(sc->cid, CRYPTO_AES_NIST_GCM_16, 0, 0);
- crypto_register(sc->cid, CRYPTO_AES_128_NIST_GMAC, 0, 0);
- crypto_register(sc->cid, CRYPTO_AES_192_NIST_GMAC, 0, 0);
- crypto_register(sc->cid, CRYPTO_AES_256_NIST_GMAC, 0, 0);
- crypto_register(sc->cid, CRYPTO_AES_XTS, 0, 0);
- }
- if ((sc->hw_features & VERSION_CAP_SHA) != 0) {
- crypto_register(sc->cid, CRYPTO_SHA1_HMAC, 0, 0);
- crypto_register(sc->cid, CRYPTO_SHA2_256_HMAC, 0, 0);
- crypto_register(sc->cid, CRYPTO_SHA2_384_HMAC, 0, 0);
- crypto_register(sc->cid, CRYPTO_SHA2_512_HMAC, 0, 0);
- }
-
return (0);
}
@@ -304,8 +284,7 @@ ccp_detach(device_t dev)
}
static void
-ccp_init_hmac_digest(struct ccp_session *s, int cri_alg, char *key,
- int klen)
+ccp_init_hmac_digest(struct ccp_session *s, const char *key, int klen)
{
union authctx auth_ctx;
struct auth_hash *axf;
@@ -316,7 +295,6 @@ ccp_init_hmac_digest(struct ccp_session *s, int cri_alg, char *key,
* the key as the key instead.
*/
axf = s->hmac.auth_hash;
- klen /= 8;
if (klen > axf->blocksize) {
axf->Init(&auth_ctx);
axf->Update(&auth_ctx, key, klen);
@@ -335,26 +313,26 @@ ccp_init_hmac_digest(struct ccp_session *s, int cri_alg, char *key,
}
}
-static int
+static bool
ccp_aes_check_keylen(int alg, int klen)
{
- switch (klen) {
+ switch (klen * 8) {
case 128:
case 192:
if (alg == CRYPTO_AES_XTS)
- return (EINVAL);
+ return (false);
break;
case 256:
break;
case 512:
if (alg != CRYPTO_AES_XTS)
- return (EINVAL);
+ return (false);
break;
default:
- return (EINVAL);
+ return (false);
}
- return (0);
+ return (true);
}
static void
@@ -363,9 +341,9 @@ ccp_aes_setkey(struct ccp_session *s, int alg, const void *key, int klen)
unsigned kbits;
if (alg == CRYPTO_AES_XTS)
- kbits = klen / 2;
+ kbits = (klen / 2) * 8;
else
- kbits = klen;
+ kbits = klen * 8;
switch (kbits) {
case 128:
@@ -381,123 +359,154 @@ ccp_aes_setkey(struct ccp_session *s, int alg, const void *key, int klen)
panic("should not get here");
}
- s->blkcipher.key_len = klen / 8;
+ s->blkcipher.key_len = klen;
memcpy(s->blkcipher.enckey, key, s->blkcipher.key_len);
}
+static bool
+ccp_auth_supported(struct ccp_softc *sc,
+ const struct crypto_session_params *csp)
+{
+
+ if ((sc->hw_features & VERSION_CAP_SHA) == 0)
+ return (false);
+ switch (csp->csp_auth_alg) {
+ case CRYPTO_SHA1_HMAC:
+ case CRYPTO_SHA2_256_HMAC:
+ case CRYPTO_SHA2_384_HMAC:
+ case CRYPTO_SHA2_512_HMAC:
+ if (csp->csp_auth_key == NULL)
+ return (false);
+ break;
+ default:
+ return (false);
+ }
+ return (true);
+}
+
+static bool
+ccp_cipher_supported(struct ccp_softc *sc,
+ const struct crypto_session_params *csp)
+{
+
+ if ((sc->hw_features & VERSION_CAP_AES) == 0)
+ return (false);
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_AES_CBC:
+ if (csp->csp_ivlen != AES_BLOCK_LEN)
+ return (false);
+ break;
+ case CRYPTO_AES_ICM:
+ if (csp->csp_ivlen != AES_BLOCK_LEN)
+ return (false);
+ break;
+ case CRYPTO_AES_XTS:
+ if (csp->csp_ivlen != AES_XTS_IV_LEN)
+ return (false);
+ break;
+ default:
+ return (false);
+ }
+ return (ccp_aes_check_keylen(csp->csp_cipher_alg,
+ csp->csp_cipher_klen));
+}
+
static int
-ccp_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
+ccp_probesession(device_t dev, const struct crypto_session_params *csp)
{
struct ccp_softc *sc;
- struct ccp_session *s;
- struct auth_hash *auth_hash;
- struct cryptoini *c, *hash, *cipher;
- enum ccp_aes_mode cipher_mode;
- unsigned auth_mode, iv_len;
- unsigned partial_digest_len;
- unsigned q;
- int error;
- bool gcm_hash;
- if (cri == NULL)
+ if (csp->csp_flags != 0)
return (EINVAL);
-
- s = crypto_get_driver_session(cses);
-
- gcm_hash = false;
- cipher = NULL;
- hash = NULL;
- auth_hash = NULL;
- /* XXX reconcile auth_mode with use by ccp_sha */
- auth_mode = 0;
- cipher_mode = CCP_AES_MODE_ECB;
- iv_len = 0;
- partial_digest_len = 0;
- for (c = cri; c != NULL; c = c->cri_next) {
- switch (c->cri_alg) {
- case CRYPTO_SHA1_HMAC:
- case CRYPTO_SHA2_256_HMAC:
- case CRYPTO_SHA2_384_HMAC:
- case CRYPTO_SHA2_512_HMAC:
- case CRYPTO_AES_128_NIST_GMAC:
- case CRYPTO_AES_192_NIST_GMAC:
- case CRYPTO_AES_256_NIST_GMAC:
- if (hash)
- return (EINVAL);
- hash = c;
- switch (c->cri_alg) {
- case CRYPTO_SHA1_HMAC:
- auth_hash = &auth_hash_hmac_sha1;
- auth_mode = SHA1;
- partial_digest_len = SHA1_HASH_LEN;
- break;
- case CRYPTO_SHA2_256_HMAC:
- auth_hash = &auth_hash_hmac_sha2_256;
- auth_mode = SHA2_256;
- partial_digest_len = SHA2_256_HASH_LEN;
- break;
- case CRYPTO_SHA2_384_HMAC:
- auth_hash = &auth_hash_hmac_sha2_384;
- auth_mode = SHA2_384;
- partial_digest_len = SHA2_512_HASH_LEN;
- break;
- case CRYPTO_SHA2_512_HMAC:
- auth_hash = &auth_hash_hmac_sha2_512;
- auth_mode = SHA2_512;
- partial_digest_len = SHA2_512_HASH_LEN;
- break;
- case CRYPTO_AES_128_NIST_GMAC:
- case CRYPTO_AES_192_NIST_GMAC:
- case CRYPTO_AES_256_NIST_GMAC:
- gcm_hash = true;
-#if 0
- auth_mode = CHCR_SCMD_AUTH_MODE_GHASH;
-#endif
- break;
- }
- break;
- case CRYPTO_AES_CBC:
- case CRYPTO_AES_ICM:
+ sc = device_get_softc(dev);
+ switch (csp->csp_mode) {
+ case CSP_MODE_DIGEST:
+ if (!ccp_auth_supported(sc, csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_CIPHER:
+ if (!ccp_cipher_supported(sc, csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_AEAD:
+ switch (csp->csp_cipher_alg) {
case CRYPTO_AES_NIST_GCM_16:
- case CRYPTO_AES_XTS:
- if (cipher)
+ if (csp->csp_ivlen != AES_GCM_IV_LEN)
+ return (EINVAL);
+ if (csp->csp_auth_mlen < 0 ||
+ csp->csp_auth_mlen > AES_GMAC_HASH_LEN)
+ return (EINVAL);
+ if ((sc->hw_features & VERSION_CAP_AES) == 0)
return (EINVAL);
- cipher = c;
- switch (c->cri_alg) {
- case CRYPTO_AES_CBC:
- cipher_mode = CCP_AES_MODE_CBC;
- iv_len = AES_BLOCK_LEN;
- break;
- case CRYPTO_AES_ICM:
- cipher_mode = CCP_AES_MODE_CTR;
- iv_len = AES_BLOCK_LEN;
- break;
- case CRYPTO_AES_NIST_GCM_16:
- cipher_mode = CCP_AES_MODE_GCTR;
- iv_len = AES_GCM_IV_LEN;
- break;
- case CRYPTO_AES_XTS:
- cipher_mode = CCP_AES_MODE_XTS;
- iv_len = AES_BLOCK_LEN;
- break;
- }
- if (c->cri_key != NULL) {
- error = ccp_aes_check_keylen(c->cri_alg,
- c->cri_klen);
- if (error != 0)
- return (error);
- }
break;
default:
return (EINVAL);
}
- }
- if (gcm_hash != (cipher_mode == CCP_AES_MODE_GCTR))
- return (EINVAL);
- if (hash == NULL && cipher == NULL)
- return (EINVAL);
- if (hash != NULL && hash->cri_key == NULL)
+ break;
+ case CSP_MODE_ETA:
+ if (!ccp_auth_supported(sc, csp) ||
+ !ccp_cipher_supported(sc, csp))
+ return (EINVAL);
+ break;
+ default:
return (EINVAL);
+ }
+
+ return (CRYPTODEV_PROBE_HARDWARE);
+}
+
+static int
+ccp_newsession(device_t dev, crypto_session_t cses,
+ const struct crypto_session_params *csp)
+{
+ struct ccp_softc *sc;
+ struct ccp_session *s;
+ struct auth_hash *auth_hash;
+ enum ccp_aes_mode cipher_mode;
+ unsigned auth_mode;
+ unsigned q;
+
+ /* XXX reconcile auth_mode with use by ccp_sha */
+ switch (csp->csp_auth_alg) {
+ case CRYPTO_SHA1_HMAC:
+ auth_hash = &auth_hash_hmac_sha1;
+ auth_mode = SHA1;
+ break;
+ case CRYPTO_SHA2_256_HMAC:
+ auth_hash = &auth_hash_hmac_sha2_256;
+ auth_mode = SHA2_256;
+ break;
+ case CRYPTO_SHA2_384_HMAC:
+ auth_hash = &auth_hash_hmac_sha2_384;
+ auth_mode = SHA2_384;
+ break;
+ case CRYPTO_SHA2_512_HMAC:
+ auth_hash = &auth_hash_hmac_sha2_512;
+ auth_mode = SHA2_512;
+ break;
+ default:
+ auth_hash = NULL;
+ auth_mode = 0;
+ break;
+ }
+
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_AES_CBC:
+ cipher_mode = CCP_AES_MODE_CBC;
+ break;
+ case CRYPTO_AES_ICM:
+ cipher_mode = CCP_AES_MODE_CTR;
+ break;
+ case CRYPTO_AES_NIST_GCM_16:
+ cipher_mode = CCP_AES_MODE_GCTR;
+ break;
+ case CRYPTO_AES_XTS:
+ cipher_mode = CCP_AES_MODE_XTS;
+ break;
+ default:
+ cipher_mode = CCP_AES_MODE_ECB;
+ break;
+ }
sc = device_get_softc(dev);
mtx_lock(&sc->lock);
@@ -506,6 +515,8 @@ ccp_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
return (ENXIO);
}
+ s = crypto_get_driver_session(cses);
+
/* Just grab the first usable queue for now. */
for (q = 0; q < nitems(sc->queues); q++)
if ((sc->valid_queues & (1 << q)) != 0)
@@ -516,38 +527,40 @@ ccp_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
}
s->queue = q;
- if (gcm_hash)
+ switch (csp->csp_mode) {
+ case CSP_MODE_AEAD:
s->mode = GCM;
- else if (hash != NULL && cipher != NULL)
+ break;
+ case CSP_MODE_ETA:
s->mode = AUTHENC;
- else if (hash != NULL)
+ break;
+ case CSP_MODE_DIGEST:
s->mode = HMAC;
- else {
- MPASS(cipher != NULL);
+ break;
+ case CSP_MODE_CIPHER:
s->mode = BLKCIPHER;
+ break;
}
- if (gcm_hash) {
- if (hash->cri_mlen == 0)
+
+ if (s->mode == GCM) {
+ if (csp->csp_auth_mlen == 0)
s->gmac.hash_len = AES_GMAC_HASH_LEN;
else
- s->gmac.hash_len = hash->cri_mlen;
- } else if (hash != NULL) {
+ s->gmac.hash_len = csp->csp_auth_mlen;
+ } else if (auth_hash != NULL) {
s->hmac.auth_hash = auth_hash;
s->hmac.auth_mode = auth_mode;
- s->hmac.partial_digest_len = partial_digest_len;
- if (hash->cri_mlen == 0)
+ if (csp->csp_auth_mlen == 0)
s->hmac.hash_len = auth_hash->hashsize;
else
- s->hmac.hash_len = hash->cri_mlen;
- ccp_init_hmac_digest(s, hash->cri_alg, hash->cri_key,
- hash->cri_klen);
+ s->hmac.hash_len = csp->csp_auth_mlen;
+ ccp_init_hmac_digest(s, csp->csp_auth_key, csp->csp_auth_klen);
}
- if (cipher != NULL) {
+ if (cipher_mode != CCP_AES_MODE_ECB) {
s->blkcipher.cipher_mode = cipher_mode;
- s->blkcipher.iv_len = iv_len;
- if (cipher->cri_key != NULL)
- ccp_aes_setkey(s, cipher->cri_alg, cipher->cri_key,
- cipher->cri_klen);
+ if (csp->csp_cipher_key != NULL)
+ ccp_aes_setkey(s, csp->csp_cipher_alg,
+ csp->csp_cipher_key, csp->csp_cipher_klen);
}
s->active = true;
@@ -573,19 +586,17 @@ ccp_freesession(device_t dev, crypto_session_t cses)
static int
ccp_process(device_t dev, struct cryptop *crp, int hint)
{
+ const struct crypto_session_params *csp;
struct ccp_softc *sc;
struct ccp_queue *qp;
struct ccp_session *s;
- struct cryptodesc *crd, *crda, *crde;
int error;
bool qpheld;
qpheld = false;
qp = NULL;
- if (crp == NULL)
- return (EINVAL);
- crd = crp->crp_desc;
+ csp = crypto_get_params(crp->crp_session);
s = crypto_get_driver_session(crp->crp_session);
sc = device_get_softc(dev);
mtx_lock(&sc->lock);
@@ -600,89 +611,47 @@ ccp_process(device_t dev, struct cryptop *crp, int hint)
if (error != 0)
goto out;
+ if (crp->crp_auth_key != NULL) {
+ KASSERT(s->hmac.auth_hash != NULL, ("auth key without HMAC"));
+ ccp_init_hmac_digest(s, crp->crp_auth_key, csp->csp_auth_klen);
+ }
+ if (crp->crp_cipher_key != NULL)
+ ccp_aes_setkey(s, csp->csp_cipher_alg, crp->crp_cipher_key,
+ csp->csp_cipher_klen);
+
switch (s->mode) {
case HMAC:
- if (crd->crd_flags & CRD_F_KEY_EXPLICIT)
- ccp_init_hmac_digest(s, crd->crd_alg, crd->crd_key,
- crd->crd_klen);
+ if (s->pending != 0) {
+ error = EAGAIN;
+ break;
+ }
error = ccp_hmac(qp, s, crp);
break;
case BLKCIPHER:
- if (crd->crd_flags & CRD_F_KEY_EXPLICIT) {
- error = ccp_aes_check_keylen(crd->crd_alg,
- crd->crd_klen);
- if (error != 0)
- break;
- ccp_aes_setkey(s, crd->crd_alg, crd->crd_key,
- crd->crd_klen);
+ if (s->pending != 0) {
+ error = EAGAIN;
+ break;
}
error = ccp_blkcipher(qp, s, crp);
break;
case AUTHENC:
- error = 0;
- switch (crd->crd_alg) {
- case CRYPTO_AES_CBC:
- case CRYPTO_AES_ICM:
- case CRYPTO_AES_XTS:
- /* Only encrypt-then-authenticate supported. */
- crde = crd;
- crda = crd->crd_next;
- if (!(crde->crd_flags & CRD_F_ENCRYPT)) {
- error = EINVAL;
- break;
- }
- s->cipher_first = true;
- break;
- default:
- crda = crd;
- crde = crd->crd_next;
- if (crde->crd_flags & CRD_F_ENCRYPT) {
- error = EINVAL;
- break;
- }
- s->cipher_first = false;
+ if (s->pending != 0) {
+ error = EAGAIN;
break;
}
- if (error != 0)
- break;
- if (crda->crd_flags & CRD_F_KEY_EXPLICIT)
- ccp_init_hmac_digest(s, crda->crd_alg, crda->crd_key,
- crda->crd_klen);
- if (crde->crd_flags & CRD_F_KEY_EXPLICIT) {
- error = ccp_aes_check_keylen(crde->crd_alg,
- crde->crd_klen);
- if (error != 0)
- break;
- ccp_aes_setkey(s, crde->crd_alg, crde->crd_key,
- crde->crd_klen);
- }
- error = ccp_authenc(qp, s, crp, crda, crde);
+ error = ccp_authenc(qp, s, crp);
break;
case GCM:
- error = 0;
- if (crd->crd_alg == CRYPTO_AES_NIST_GCM_16) {
- crde = crd;
- crda = crd->crd_next;
- s->cipher_first = true;
- } else {
- crda = crd;
- crde = crd->crd_next;
- s->cipher_first = false;
- }
- if (crde->crd_flags & CRD_F_KEY_EXPLICIT) {
- error = ccp_aes_check_keylen(crde->crd_alg,
- crde->crd_klen);
- if (error != 0)
- break;
- ccp_aes_setkey(s, crde->crd_alg, crde->crd_key,
- crde->crd_klen);
- }
- if (crde->crd_len == 0) {
+ if (crp->crp_payload_length == 0) {
mtx_unlock(&qp->cq_lock);
- ccp_gcm_soft(s, crp, crda, crde);
+ ccp_gcm_soft(s, crp);
return (0);
}
- error = ccp_gcm(qp, s, crp, crda, crde);
+ if (s->pending != 0) {
+ error = EAGAIN;
+ break;
+ }
+ error = ccp_gcm(qp, s, crp);
break;
}
@@ -716,6 +685,7 @@ static device_method_t ccp_methods[] = {
DEVMETHOD(device_attach, ccp_attach),
DEVMETHOD(device_detach, ccp_detach),
+ DEVMETHOD(cryptodev_probesession, ccp_probesession),
DEVMETHOD(cryptodev_newsession, ccp_newsession),
DEVMETHOD(cryptodev_freesession, ccp_freesession),
DEVMETHOD(cryptodev_process, ccp_process),
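A recurring detail in the ccp.c hunks above is the key-length convention change: the old `cri_klen` was in bits, while `csp_cipher_klen` is in bytes, which is why `ccp_aes_check_keylen` now switches on `klen * 8` and `ccp_aes_setkey` drops its `/ 8`. The sketch below is a standalone restatement of that check, not the kernel function; the `CRYPTO_AES_*` values are placeholders:

```c
/*
 * Standalone sketch of the byte-based AES key-length check.  The
 * algorithm constants are illustrative, not from cryptodev.h.
 */
#include <assert.h>
#include <stdbool.h>

#define	CRYPTO_AES_CBC	11	/* placeholder value */
#define	CRYPTO_AES_XTS	22	/* placeholder value */

static bool
aes_check_keylen(int alg, int klen_bytes)
{
	switch (klen_bytes * 8) {	/* klen is now in bytes, not bits */
	case 128:
	case 192:
		/* XTS needs a doubled key, so plain AES sizes only. */
		return (alg != CRYPTO_AES_XTS);
	case 256:
		/* AES-256, or AES-128-XTS with its doubled key. */
		return (true);
	case 512:
		/* Only XTS uses a 512-bit (doubled AES-256) key. */
		return (alg == CRYPTO_AES_XTS);
	default:
		return (false);
	}
}
```

Mixing up the two conventions is an easy off-by-8x bug during a conversion like this, which is presumably why the check was rewritten to return `bool` and take bytes directly.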
diff --git a/sys/crypto/ccp/ccp.h b/sys/crypto/ccp/ccp.h
index e622e475f0a8..197cbc6b4c36 100644
--- a/sys/crypto/ccp/ccp.h
+++ b/sys/crypto/ccp/ccp.h
@@ -58,14 +58,18 @@ enum sha_version {
SHA2_256, SHA2_384, SHA2_512
};
+/*
+ * XXX: The hmac.res, gmac.final_block, and blkcipher.iv fields are
+ * used by individual requests, meaning that sessions cannot have more
+ * than a single request in flight at a time.
+ */
struct ccp_session_hmac {
struct auth_hash *auth_hash;
int hash_len;
- unsigned int partial_digest_len;
unsigned int auth_mode;
- unsigned int mk_size;
char ipad[CCP_HASH_MAX_BLOCK_SIZE];
char opad[CCP_HASH_MAX_BLOCK_SIZE];
+ char res[CCP_HASH_MAX_BLOCK_SIZE];
};
struct ccp_session_gmac {
@@ -77,14 +81,12 @@ struct ccp_session_blkcipher {
unsigned cipher_mode;
unsigned cipher_type;
unsigned key_len;
- unsigned iv_len;
char enckey[CCP_AES_MAX_KEY_LEN];
char iv[CCP_MAX_CRYPTO_IV_LEN];
};
struct ccp_session {
- bool active : 1;
- bool cipher_first : 1;
+ bool active;
int pending;
enum { HMAC, BLKCIPHER, AUTHENC, GCM } mode;
unsigned queue;
@@ -217,12 +219,11 @@ void db_ccp_show_queue_hw(struct ccp_queue *qp);
* Internal hardware crypt-op submission routines.
*/
int ccp_authenc(struct ccp_queue *sc, struct ccp_session *s,
- struct cryptop *crp, struct cryptodesc *crda, struct cryptodesc *crde)
- __must_check;
+ struct cryptop *crp) __must_check;
int ccp_blkcipher(struct ccp_queue *sc, struct ccp_session *s,
struct cryptop *crp) __must_check;
-int ccp_gcm(struct ccp_queue *sc, struct ccp_session *s, struct cryptop *crp,
- struct cryptodesc *crda, struct cryptodesc *crde) __must_check;
+int ccp_gcm(struct ccp_queue *sc, struct ccp_session *s, struct cryptop *crp)
+ __must_check;
int ccp_hmac(struct ccp_queue *sc, struct ccp_session *s, struct cryptop *crp)
__must_check;
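The XXX comment added to ccp.h above explains the new `s->pending != 0 → EAGAIN` checks in `ccp_process()`: because per-session scratch buffers (`hmac.res`, `gmac.final_block`, `blkcipher.iv`) are written by each request, only one request per session may be in flight. The sketch below models that gating in isolation; the `toy_` names are hypothetical, not driver API:

```c
/*
 * Sketch of single-request-in-flight gating on shared per-session
 * scratch state.  Simplified stand-alone model, not kernel code.
 */
#include <assert.h>
#include <errno.h>

struct toy_session {
	int pending;	/* requests currently using session scratch state */
};

static int
toy_submit(struct toy_session *s)
{
	if (s->pending != 0)
		return (EAGAIN);	/* scratch buffers are busy */
	s->pending++;
	return (0);
}

static void
toy_complete(struct toy_session *s)
{
	s->pending--;	/* scratch state free for the next request */
}
```

Returning `EAGAIN` pushes the retry back to the framework rather than queueing inside the driver, which keeps the session's scratch buffers single-writer without extra per-request allocations.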
diff --git a/sys/crypto/ccp/ccp_hardware.c b/sys/crypto/ccp/ccp_hardware.c
index 50ff7a7a155d..113c3ca74890 100644
--- a/sys/crypto/ccp/ccp_hardware.c
+++ b/sys/crypto/ccp/ccp_hardware.c
@@ -895,7 +895,7 @@ ccp_passthrough_sgl(struct ccp_queue *qp, bus_addr_t lsb_addr, bool tolsb,
remain = len;
for (i = 0; i < sgl->sg_nseg && remain != 0; i++) {
seg = &sgl->sg_segs[i];
- /* crd_len is int, so 32-bit min() is ok. */
+ /* crp lengths are int, so 32-bit min() is ok. */
nb = min(remain, seg->ss_len);
if (tolsb)
@@ -1116,7 +1116,7 @@ ccp_sha(struct ccp_queue *qp, enum sha_version version, struct sglist *sgl_src,
lsbaddr = ccp_queue_lsb_address(qp, LSB_ENTRY_SHA);
for (i = 0; i < sgl_dst->sg_nseg; i++) {
seg = &sgl_dst->sg_segs[i];
- /* crd_len is int, so 32-bit min() is ok. */
+ /* crp lengths are int, so 32-bit min() is ok. */
nb = min(remaining, seg->ss_len);
error = ccp_passthrough(qp, seg->ss_paddr, CCP_MEMTYPE_SYSTEM,
@@ -1202,7 +1202,7 @@ ccp_sha_copy_result(char *output, char *buffer, enum sha_version version)
static void
ccp_do_hmac_done(struct ccp_queue *qp, struct ccp_session *s,
- struct cryptop *crp, struct cryptodesc *crd, int error)
+ struct cryptop *crp, int error)
{
char ihash[SHA2_512_HASH_LEN /* max hash len */];
union authctx auth_ctx;
@@ -1220,21 +1220,26 @@ ccp_do_hmac_done(struct ccp_queue *qp, struct ccp_session *s,
/* Do remaining outer hash over small inner hash in software */
axf->Init(&auth_ctx);
axf->Update(&auth_ctx, s->hmac.opad, axf->blocksize);
- ccp_sha_copy_result(ihash, s->hmac.ipad, s->hmac.auth_mode);
+ ccp_sha_copy_result(ihash, s->hmac.res, s->hmac.auth_mode);
#if 0
INSECURE_DEBUG(dev, "%s sha intermediate=%64D\n", __func__,
(u_char *)ihash, " ");
#endif
axf->Update(&auth_ctx, ihash, axf->hashsize);
- axf->Final(s->hmac.ipad, &auth_ctx);
+ axf->Final(s->hmac.res, &auth_ctx);
- crypto_copyback(crp->crp_flags, crp->crp_buf, crd->crd_inject,
- s->hmac.hash_len, s->hmac.ipad);
+ if (crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) {
+ crypto_copydata(crp, crp->crp_digest_start, s->hmac.hash_len,
+ ihash);
+ if (timingsafe_bcmp(s->hmac.res, ihash, s->hmac.hash_len) != 0)
+ crp->crp_etype = EBADMSG;
+ } else
+ crypto_copyback(crp, crp->crp_digest_start, s->hmac.hash_len,
+ s->hmac.res);
/* Avoid leaking key material */
explicit_bzero(&auth_ctx, sizeof(auth_ctx));
- explicit_bzero(s->hmac.ipad, sizeof(s->hmac.ipad));
- explicit_bzero(s->hmac.opad, sizeof(s->hmac.opad));
+ explicit_bzero(s->hmac.res, sizeof(s->hmac.res));
out:
crypto_done(crp);
@@ -1244,17 +1249,15 @@ static void
ccp_hmac_done(struct ccp_queue *qp, struct ccp_session *s, void *vcrp,
int error)
{
- struct cryptodesc *crd;
struct cryptop *crp;
crp = vcrp;
- crd = crp->crp_desc;
- ccp_do_hmac_done(qp, s, crp, crd, error);
+ ccp_do_hmac_done(qp, s, crp, error);
}
static int __must_check
ccp_do_hmac(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp,
- struct cryptodesc *crd, const struct ccp_completion_ctx *cctx)
+ const struct ccp_completion_ctx *cctx)
{
device_t dev;
struct auth_hash *axf;
@@ -1272,15 +1275,21 @@ ccp_do_hmac(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp,
error = sglist_append(qp->cq_sg_ulptx, s->hmac.ipad, axf->blocksize);
if (error != 0)
return (error);
+ if (crp->crp_aad_length != 0) {
+ error = sglist_append_sglist(qp->cq_sg_ulptx, qp->cq_sg_crp,
+ crp->crp_aad_start, crp->crp_aad_length);
+ if (error != 0)
+ return (error);
+ }
error = sglist_append_sglist(qp->cq_sg_ulptx, qp->cq_sg_crp,
- crd->crd_skip, crd->crd_len);
+ crp->crp_payload_start, crp->crp_payload_length);
if (error != 0) {
DPRINTF(dev, "%s: sglist too short\n", __func__);
return (error);
}
- /* Populate SGL for output -- just reuse hmac.ipad buffer. */
+ /* Populate SGL for output -- use hmac.res buffer. */
sglist_reset(qp->cq_sg_dst);
- error = sglist_append(qp->cq_sg_dst, s->hmac.ipad,
+ error = sglist_append(qp->cq_sg_dst, s->hmac.res,
roundup2(axf->hashsize, LSB_ENTRY_SIZE));
if (error != 0)
return (error);
@@ -1298,15 +1307,12 @@ int __must_check
ccp_hmac(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp)
{
struct ccp_completion_ctx ctx;
- struct cryptodesc *crd;
-
- crd = crp->crp_desc;
ctx.callback_fn = ccp_hmac_done;
ctx.callback_arg = crp;
ctx.session = s;
- return (ccp_do_hmac(qp, s, crp, crd, &ctx));
+ return (ccp_do_hmac(qp, s, crp, &ctx));
}
static void
@@ -1329,7 +1335,7 @@ ccp_blkcipher_done(struct ccp_queue *qp, struct ccp_session *s, void *vcrp,
{
struct cryptop *crp;
- explicit_bzero(&s->blkcipher, sizeof(s->blkcipher));
+ explicit_bzero(&s->blkcipher.iv, sizeof(s->blkcipher.iv));
crp = vcrp;
@@ -1343,57 +1349,39 @@ ccp_blkcipher_done(struct ccp_queue *qp, struct ccp_session *s, void *vcrp,
}
static void
-ccp_collect_iv(struct ccp_session *s, struct cryptop *crp,
- struct cryptodesc *crd)
-{
-
- if (crd->crd_flags & CRD_F_ENCRYPT) {
- if (crd->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(s->blkcipher.iv, crd->crd_iv,
- s->blkcipher.iv_len);
- else
- arc4rand(s->blkcipher.iv, s->blkcipher.iv_len, 0);
- if ((crd->crd_flags & CRD_F_IV_PRESENT) == 0)
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- crd->crd_inject, s->blkcipher.iv_len,
- s->blkcipher.iv);
- } else {
- if (crd->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(s->blkcipher.iv, crd->crd_iv,
- s->blkcipher.iv_len);
- else
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crd->crd_inject, s->blkcipher.iv_len,
- s->blkcipher.iv);
- }
+ccp_collect_iv(struct cryptop *crp, const struct crypto_session_params *csp,
+ char *iv)
+{
+
+ if (crp->crp_flags & CRYPTO_F_IV_GENERATE) {
+ arc4rand(iv, csp->csp_ivlen, 0);
+ crypto_copyback(crp, crp->crp_iv_start, csp->csp_ivlen, iv);
+ } else if (crp->crp_flags & CRYPTO_F_IV_SEPARATE)
+ memcpy(iv, crp->crp_iv, csp->csp_ivlen);
+ else
+ crypto_copydata(crp, crp->crp_iv_start, csp->csp_ivlen, iv);
/*
* If the input IV is 12 bytes, append an explicit counter of 1.
*/
- if (crd->crd_alg == CRYPTO_AES_NIST_GCM_16 &&
- s->blkcipher.iv_len == 12) {
- *(uint32_t *)&s->blkcipher.iv[12] = htobe32(1);
- s->blkcipher.iv_len = AES_BLOCK_LEN;
- }
+ if (csp->csp_cipher_alg == CRYPTO_AES_NIST_GCM_16 &&
+ csp->csp_ivlen == 12)
+ *(uint32_t *)&iv[12] = htobe32(1);
- if (crd->crd_alg == CRYPTO_AES_XTS && s->blkcipher.iv_len != AES_BLOCK_LEN) {
- DPRINTF(NULL, "got ivlen != 16: %u\n", s->blkcipher.iv_len);
- if (s->blkcipher.iv_len < AES_BLOCK_LEN)
- memset(&s->blkcipher.iv[s->blkcipher.iv_len], 0,
- AES_BLOCK_LEN - s->blkcipher.iv_len);
- s->blkcipher.iv_len = AES_BLOCK_LEN;
- }
+ if (csp->csp_cipher_alg == CRYPTO_AES_XTS &&
+ csp->csp_ivlen < AES_BLOCK_LEN)
+ memset(&iv[csp->csp_ivlen], 0, AES_BLOCK_LEN - csp->csp_ivlen);
/* Reverse order of IV material for HW */
- INSECURE_DEBUG(NULL, "%s: IV: %16D len: %u\n", __func__,
- s->blkcipher.iv, " ", s->blkcipher.iv_len);
+ INSECURE_DEBUG(NULL, "%s: IV: %16D len: %u\n", __func__, iv, " ",
+ csp->csp_ivlen);
/*
* For unknown reasons, XTS mode expects the IV in the reverse byte
* order to every other AES mode.
*/
- if (crd->crd_alg != CRYPTO_AES_XTS)
- ccp_byteswap(s->blkcipher.iv, s->blkcipher.iv_len);
+ if (csp->csp_cipher_alg != CRYPTO_AES_XTS)
+ ccp_byteswap(iv, AES_BLOCK_LEN);
}
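The `ccp_collect_iv()` logic above special-cases a 12-byte GCM nonce by appending a big-endian block counter of 1 to form the initial 16-byte counter block (J0 in NIST SP 800-38D). A minimal standalone sketch of that formatting step — `gcm_format_iv` is a hypothetical helper, not a kernel function:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define AES_BLOCK_LEN 16

/*
 * Mirror of the 12-byte-nonce handling in ccp_collect_iv(): copy the
 * nonce into the IV buffer and, when it is exactly 12 bytes, append a
 * big-endian 32-bit counter of 1 (htobe32(1) spelled out per byte).
 */
static void
gcm_format_iv(uint8_t iv[AES_BLOCK_LEN], const uint8_t *nonce, size_t nlen)
{
	memcpy(iv, nonce, nlen);
	if (nlen == 12) {
		iv[12] = 0;
		iv[13] = 0;
		iv[14] = 0;
		iv[15] = 1;
	}
}
```

Longer IVs (a full 16-byte block) are passed through unchanged, which matches the driver only rewriting the tail when `csp_ivlen == 12`.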
static int __must_check
@@ -1414,8 +1402,7 @@ ccp_do_pst_to_lsb(struct ccp_queue *qp, uint32_t lsbaddr, const void *src,
static int __must_check
ccp_do_xts(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp,
- struct cryptodesc *crd, enum ccp_cipher_dir dir,
- const struct ccp_completion_ctx *cctx)
+ enum ccp_cipher_dir dir, const struct ccp_completion_ctx *cctx)
{
struct ccp_desc *desc;
device_t dev;
@@ -1427,7 +1414,8 @@ ccp_do_xts(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp,
dev = qp->cq_softc->dev;
for (i = 0; i < nitems(ccp_xts_unitsize_map); i++)
- if (ccp_xts_unitsize_map[i].cxu_size == crd->crd_len) {
+ if (ccp_xts_unitsize_map[i].cxu_size ==
+ crp->crp_payload_length) {
usize = ccp_xts_unitsize_map[i].cxu_id;
break;
}
@@ -1484,25 +1472,26 @@ ccp_do_xts(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp,
static int __must_check
ccp_do_blkcipher(struct ccp_queue *qp, struct ccp_session *s,
- struct cryptop *crp, struct cryptodesc *crd,
- const struct ccp_completion_ctx *cctx)
+ struct cryptop *crp, const struct ccp_completion_ctx *cctx)
{
+ const struct crypto_session_params *csp;
struct ccp_desc *desc;
char *keydata;
device_t dev;
enum ccp_cipher_dir dir;
- int error;
+ int error, iv_len;
size_t keydata_len;
unsigned i, j;
dev = qp->cq_softc->dev;
- if (s->blkcipher.key_len == 0 || crd->crd_len == 0) {
+ if (s->blkcipher.key_len == 0 || crp->crp_payload_length == 0) {
DPRINTF(dev, "%s: empty\n", __func__);
return (EINVAL);
}
- if ((crd->crd_len % AES_BLOCK_LEN) != 0) {
- DPRINTF(dev, "%s: len modulo: %d\n", __func__, crd->crd_len);
+ if ((crp->crp_payload_length % AES_BLOCK_LEN) != 0) {
+ DPRINTF(dev, "%s: len modulo: %d\n", __func__,
+ crp->crp_payload_length);
return (EINVAL);
}
@@ -1519,16 +1508,20 @@ ccp_do_blkcipher(struct ccp_queue *qp, struct ccp_session *s,
}
/* Gather IV/nonce data */
- ccp_collect_iv(s, crp, crd);
+ csp = crypto_get_params(crp->crp_session);
+ ccp_collect_iv(crp, csp, s->blkcipher.iv);
+ iv_len = csp->csp_ivlen;
+ if (csp->csp_cipher_alg == CRYPTO_AES_XTS)
+ iv_len = AES_BLOCK_LEN;
- if ((crd->crd_flags & CRD_F_ENCRYPT) != 0)
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
dir = CCP_CIPHER_DIR_ENCRYPT;
else
dir = CCP_CIPHER_DIR_DECRYPT;
/* Set up passthrough op(s) to copy IV into LSB */
error = ccp_do_pst_to_lsb(qp, ccp_queue_lsb_address(qp, LSB_ENTRY_IV),
- s->blkcipher.iv, s->blkcipher.iv_len);
+ s->blkcipher.iv, iv_len);
if (error != 0)
return (error);
@@ -1539,15 +1532,16 @@ ccp_do_blkcipher(struct ccp_queue *qp, struct ccp_session *s,
keydata_len = 0;
keydata = NULL;
- switch (crd->crd_alg) {
+ switch (csp->csp_cipher_alg) {
case CRYPTO_AES_XTS:
for (j = 0; j < nitems(ccp_xts_unitsize_map); j++)
- if (ccp_xts_unitsize_map[j].cxu_size == crd->crd_len)
+ if (ccp_xts_unitsize_map[j].cxu_size ==
+ crp->crp_payload_length)
break;
/* Input buffer must be a supported UnitSize */
if (j >= nitems(ccp_xts_unitsize_map)) {
device_printf(dev, "%s: rejected block size: %u\n",
- __func__, crd->crd_len);
+ __func__, crp->crp_payload_length);
return (EOPNOTSUPP);
}
/* FALLTHROUGH */
@@ -1560,14 +1554,14 @@ ccp_do_blkcipher(struct ccp_queue *qp, struct ccp_session *s,
INSECURE_DEBUG(dev, "%s: KEY(%zu): %16D\n", __func__, keydata_len,
keydata, " ");
- if (crd->crd_alg == CRYPTO_AES_XTS)
+ if (csp->csp_cipher_alg == CRYPTO_AES_XTS)
INSECURE_DEBUG(dev, "%s: KEY(XTS): %64D\n", __func__, keydata, " ");
/* Reverse order of key material for HW */
ccp_byteswap(keydata, keydata_len);
/* Store key material into LSB to avoid page boundaries */
- if (crd->crd_alg == CRYPTO_AES_XTS) {
+ if (csp->csp_cipher_alg == CRYPTO_AES_XTS) {
/*
* XTS mode uses 2 256-bit vectors for the primary key and the
* tweak key. For 128-bit keys, the vectors are zero-padded.
@@ -1611,7 +1605,7 @@ ccp_do_blkcipher(struct ccp_queue *qp, struct ccp_session *s,
*/
sglist_reset(qp->cq_sg_ulptx);
error = sglist_append_sglist(qp->cq_sg_ulptx, qp->cq_sg_crp,
- crd->crd_skip, crd->crd_len);
+ crp->crp_payload_start, crp->crp_payload_length);
if (error != 0)
return (error);
@@ -1623,8 +1617,8 @@ ccp_do_blkcipher(struct ccp_queue *qp, struct ccp_session *s,
if (ccp_queue_get_ring_space(qp) < qp->cq_sg_ulptx->sg_nseg)
return (EAGAIN);
- if (crd->crd_alg == CRYPTO_AES_XTS)
- return (ccp_do_xts(qp, s, crp, crd, dir, cctx));
+ if (csp->csp_cipher_alg == CRYPTO_AES_XTS)
+ return (ccp_do_xts(qp, s, crp, dir, cctx));
for (i = 0; i < qp->cq_sg_ulptx->sg_nseg; i++) {
struct sglist_seg *seg;
@@ -1647,7 +1641,7 @@ ccp_do_blkcipher(struct ccp_queue *qp, struct ccp_session *s,
desc->aes.encrypt = dir;
desc->aes.mode = s->blkcipher.cipher_mode;
desc->aes.type = s->blkcipher.cipher_type;
- if (crd->crd_alg == CRYPTO_AES_ICM)
+ if (csp->csp_cipher_alg == CRYPTO_AES_ICM)
/*
* Size of CTR value in bits, - 1. ICM mode uses all
* 128 bits as counter.
@@ -1684,38 +1678,29 @@ int __must_check
ccp_blkcipher(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp)
{
struct ccp_completion_ctx ctx;
- struct cryptodesc *crd;
-
- crd = crp->crp_desc;
ctx.callback_fn = ccp_blkcipher_done;
ctx.session = s;
ctx.callback_arg = crp;
- return (ccp_do_blkcipher(qp, s, crp, crd, &ctx));
+ return (ccp_do_blkcipher(qp, s, crp, &ctx));
}
static void
ccp_authenc_done(struct ccp_queue *qp, struct ccp_session *s, void *vcrp,
int error)
{
- struct cryptodesc *crda;
struct cryptop *crp;
- explicit_bzero(&s->blkcipher, sizeof(s->blkcipher));
+ explicit_bzero(&s->blkcipher.iv, sizeof(s->blkcipher.iv));
crp = vcrp;
- if (s->cipher_first)
- crda = crp->crp_desc->crd_next;
- else
- crda = crp->crp_desc;
- ccp_do_hmac_done(qp, s, crp, crda, error);
+ ccp_do_hmac_done(qp, s, crp, error);
}
int __must_check
-ccp_authenc(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp,
- struct cryptodesc *crda, struct cryptodesc *crde)
+ccp_authenc(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp)
{
struct ccp_completion_ctx ctx;
int error;
@@ -1725,18 +1710,18 @@ ccp_authenc(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp,
ctx.callback_arg = crp;
/* Perform first operation */
- if (s->cipher_first)
- error = ccp_do_blkcipher(qp, s, crp, crde, NULL);
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
+ error = ccp_do_blkcipher(qp, s, crp, NULL);
else
- error = ccp_do_hmac(qp, s, crp, crda, NULL);
+ error = ccp_do_hmac(qp, s, crp, NULL);
if (error != 0)
return (error);
/* Perform second operation */
- if (s->cipher_first)
- error = ccp_do_hmac(qp, s, crp, crda, &ctx);
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
+ error = ccp_do_hmac(qp, s, crp, &ctx);
else
- error = ccp_do_blkcipher(qp, s, crp, crde, &ctx);
+ error = ccp_do_blkcipher(qp, s, crp, &ctx);
return (error);
}
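`ccp_authenc()` above orders its two sub-operations by direction: when encrypting, the cipher pass runs first and the HMAC is computed over the result; when decrypting, the HMAC pass runs first. A small sketch of that encrypt-then-authenticate ordering (the enum and helper are illustrative, not kernel symbols):

```c
/* Two sub-operations of an ETA (encrypt-then-authenticate) request. */
enum eta_step { STEP_CIPHER, STEP_HMAC };

/*
 * Decide sub-operation order the way ccp_authenc() does: encryption
 * ciphers first so the MAC covers ciphertext; decryption checks the
 * MAC first, before the ciphertext is touched.
 */
static void
eta_order(int encrypting, enum eta_step order[2])
{
	if (encrypting) {
		order[0] = STEP_CIPHER;
		order[1] = STEP_HMAC;
	} else {
		order[0] = STEP_HMAC;
		order[1] = STEP_CIPHER;
	}
}
```

Note the completion context (`ctx`) is attached only to the second operation, so the request completes once both passes are done.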
@@ -1853,17 +1838,9 @@ ccp_gcm_done(struct ccp_queue *qp, struct ccp_session *s, void *vcrp,
int error)
{
char tag[GMAC_DIGEST_LEN];
- struct cryptodesc *crde, *crda;
struct cryptop *crp;
crp = vcrp;
- if (s->cipher_first) {
- crde = crp->crp_desc;
- crda = crp->crp_desc->crd_next;
- } else {
- crde = crp->crp_desc->crd_next;
- crda = crp->crp_desc;
- }
s->pending--;
@@ -1873,27 +1850,26 @@ ccp_gcm_done(struct ccp_queue *qp, struct ccp_session *s, void *vcrp,
}
/* Encrypt is done. Decrypt needs to verify tag. */
- if ((crde->crd_flags & CRD_F_ENCRYPT) != 0)
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
goto out;
/* Copy in message tag. */
- crypto_copydata(crp->crp_flags, crp->crp_buf, crda->crd_inject,
- sizeof(tag), tag);
+ crypto_copydata(crp, crp->crp_digest_start, s->gmac.hash_len, tag);
/* Verify tag against computed GMAC */
if (timingsafe_bcmp(tag, s->gmac.final_block, s->gmac.hash_len) != 0)
crp->crp_etype = EBADMSG;
out:
- explicit_bzero(&s->blkcipher, sizeof(s->blkcipher));
- explicit_bzero(&s->gmac, sizeof(s->gmac));
+ explicit_bzero(&s->blkcipher.iv, sizeof(s->blkcipher.iv));
+ explicit_bzero(&s->gmac.final_block, sizeof(s->gmac.final_block));
crypto_done(crp);
}
int __must_check
-ccp_gcm(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp,
- struct cryptodesc *crda, struct cryptodesc *crde)
+ccp_gcm(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp)
{
+ const struct crypto_session_params *csp;
struct ccp_completion_ctx ctx;
enum ccp_cipher_dir dir;
device_t dev;
@@ -1903,16 +1879,9 @@ ccp_gcm(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp,
if (s->blkcipher.key_len == 0)
return (EINVAL);
- /*
- * AAD is only permitted before the cipher/plain text, not
- * after.
- */
- if (crda->crd_len + crda->crd_skip > crde->crd_len + crde->crd_skip)
- return (EINVAL);
-
dev = qp->cq_softc->dev;
- if ((crde->crd_flags & CRD_F_ENCRYPT) != 0)
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
dir = CCP_CIPHER_DIR_ENCRYPT;
else
dir = CCP_CIPHER_DIR_DECRYPT;
@@ -1921,14 +1890,15 @@ ccp_gcm(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp,
memset(s->blkcipher.iv, 0, sizeof(s->blkcipher.iv));
/* Gather IV data */
- ccp_collect_iv(s, crp, crde);
+ csp = crypto_get_params(crp->crp_session);
+ ccp_collect_iv(crp, csp, s->blkcipher.iv);
/* Reverse order of key material for HW */
ccp_byteswap(s->blkcipher.enckey, s->blkcipher.key_len);
/* Prepare input buffer of concatenated lengths for final GHASH */
- be64enc(s->gmac.final_block, (uint64_t)crda->crd_len * 8);
- be64enc(&s->gmac.final_block[8], (uint64_t)crde->crd_len * 8);
+ be64enc(s->gmac.final_block, (uint64_t)crp->crp_aad_length * 8);
+ be64enc(&s->gmac.final_block[8], (uint64_t)crp->crp_payload_length * 8);
/* Send IV + initial zero GHASH, key data, and lengths buffer to LSB */
error = ccp_do_pst_to_lsb(qp, ccp_queue_lsb_address(qp, LSB_ENTRY_IV),
@@ -1946,10 +1916,10 @@ ccp_gcm(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp,
return (error);
/* First step - compute GHASH over AAD */
- if (crda->crd_len != 0) {
+ if (crp->crp_aad_length != 0) {
sglist_reset(qp->cq_sg_ulptx);
error = sglist_append_sglist(qp->cq_sg_ulptx, qp->cq_sg_crp,
- crda->crd_skip, crda->crd_len);
+ crp->crp_aad_start, crp->crp_aad_length);
if (error != 0)
return (error);
@@ -1971,7 +1941,7 @@ ccp_gcm(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp,
/* Feed data piece by piece into GCTR */
sglist_reset(qp->cq_sg_ulptx);
error = sglist_append_sglist(qp->cq_sg_ulptx, qp->cq_sg_crp,
- crde->crd_skip, crde->crd_len);
+ crp->crp_payload_start, crp->crp_payload_length);
if (error != 0)
return (error);
@@ -1997,7 +1967,7 @@ ccp_gcm(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp,
seg = &qp->cq_sg_ulptx->sg_segs[i];
error = ccp_do_gctr(qp, s, dir, seg,
- (i == 0 && crda->crd_len == 0),
+ (i == 0 && crp->crp_aad_length == 0),
i == (qp->cq_sg_ulptx->sg_nseg - 1));
if (error != 0)
return (error);
@@ -2005,7 +1975,7 @@ ccp_gcm(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp,
/* Send just initial IV (not GHASH!) to LSB again */
error = ccp_do_pst_to_lsb(qp, ccp_queue_lsb_address(qp, LSB_ENTRY_IV),
- s->blkcipher.iv, s->blkcipher.iv_len);
+ s->blkcipher.iv, AES_BLOCK_LEN);
if (error != 0)
return (error);
@@ -2022,7 +1992,7 @@ ccp_gcm(struct ccp_queue *qp, struct ccp_session *s, struct cryptop *crp,
sglist_reset(qp->cq_sg_ulptx);
if (dir == CCP_CIPHER_DIR_ENCRYPT)
error = sglist_append_sglist(qp->cq_sg_ulptx, qp->cq_sg_crp,
- crda->crd_inject, s->gmac.hash_len);
+ crp->crp_digest_start, s->gmac.hash_len);
else
/*
* For decrypting, copy the computed tag out to our session
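The ccp GCM path above builds a final GHASH input block with `be64enc()`: the big-endian *bit* lengths of the AAD and the payload, concatenated. A self-contained sketch of that length block (hand-rolled big-endian stores instead of the kernel's `be64enc`):

```c
#include <stdint.h>

/*
 * Build the 16-byte GHASH length block used to finalize GCM: the
 * big-endian bit length of the AAD followed by the big-endian bit
 * length of the ciphertext (NIST SP 800-38D, step 5 of GHASH).
 */
static void
ghash_len_block(uint8_t blk[16], uint64_t aad_len, uint64_t payload_len)
{
	uint64_t v;
	int i;

	v = aad_len * 8;		/* lengths are in bits, not bytes */
	for (i = 0; i < 8; i++)
		blk[i] = (uint8_t)(v >> (56 - 8 * i));
	v = payload_len * 8;
	for (i = 0; i < 8; i++)
		blk[8 + i] = (uint8_t)(v >> (56 - 8 * i));
}
```

The refactor makes this simpler to read: the lengths come straight from `crp_aad_length` and `crp_payload_length` rather than from walking descriptor structures.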
diff --git a/sys/crypto/via/padlock.c b/sys/crypto/via/padlock.c
index 66ef76bf05bb..7fc8a2833f8e 100644
--- a/sys/crypto/via/padlock.c
+++ b/sys/crypto/via/padlock.c
@@ -60,7 +60,9 @@ struct padlock_softc {
int32_t sc_cid;
};
-static int padlock_newsession(device_t, crypto_session_t cses, struct cryptoini *cri);
+static int padlock_probesession(device_t, const struct crypto_session_params *);
+static int padlock_newsession(device_t, crypto_session_t cses,
+ const struct crypto_session_params *);
static void padlock_freesession(device_t, crypto_session_t cses);
static void padlock_freesession_one(struct padlock_softc *sc,
struct padlock_session *ses);
@@ -123,13 +125,6 @@ padlock_attach(device_t dev)
return (ENOMEM);
}
- crypto_register(sc->sc_cid, CRYPTO_AES_CBC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_MD5_HMAC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_SHA1_HMAC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_RIPEMD160_HMAC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_SHA2_256_HMAC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_SHA2_384_HMAC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_SHA2_512_HMAC, 0, 0);
return (0);
}
@@ -143,63 +138,65 @@ padlock_detach(device_t dev)
}
static int
-padlock_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
+padlock_probesession(device_t dev, const struct crypto_session_params *csp)
{
- struct padlock_softc *sc = device_get_softc(dev);
- struct padlock_session *ses = NULL;
- struct cryptoini *encini, *macini;
- struct thread *td;
- int error;
- if (cri == NULL)
+ if (csp->csp_flags != 0)
return (EINVAL);
- encini = macini = NULL;
- for (; cri != NULL; cri = cri->cri_next) {
- switch (cri->cri_alg) {
- case CRYPTO_NULL_HMAC:
- case CRYPTO_MD5_HMAC:
- case CRYPTO_SHA1_HMAC:
- case CRYPTO_RIPEMD160_HMAC:
- case CRYPTO_SHA2_256_HMAC:
- case CRYPTO_SHA2_384_HMAC:
- case CRYPTO_SHA2_512_HMAC:
- if (macini != NULL)
- return (EINVAL);
- macini = cri;
- break;
+ /*
+ * We only support HMAC algorithms to be able to work with
+ * ipsec(4), so if we are asked only for authentication without
+ * encryption, don't pretend we can accelerate it.
+ *
+ * XXX: For CPUs with SHA instructions we should probably
+ * permit CSP_MODE_DIGEST so that those can be tested.
+ */
+ switch (csp->csp_mode) {
+ case CSP_MODE_ETA:
+ if (!padlock_hash_check(csp))
+ return (EINVAL);
+ /* FALLTHROUGH */
+ case CSP_MODE_CIPHER:
+ switch (csp->csp_cipher_alg) {
case CRYPTO_AES_CBC:
- if (encini != NULL)
+ if (csp->csp_ivlen != AES_BLOCK_LEN)
return (EINVAL);
- encini = cri;
break;
default:
return (EINVAL);
}
+ break;
+ default:
+ return (EINVAL);
}
- /*
- * We only support HMAC algorithms to be able to work with
- * ipsec(4), so if we are asked only for authentication without
- * encryption, don't pretend we can accellerate it.
- */
- if (encini == NULL)
- return (EINVAL);
+ return (CRYPTODEV_PROBE_ACCEL_SOFTWARE);
+}
+
+static int
+padlock_newsession(device_t dev, crypto_session_t cses,
+ const struct crypto_session_params *csp)
+{
+ struct padlock_softc *sc = device_get_softc(dev);
+ struct padlock_session *ses = NULL;
+ struct thread *td;
+ int error;
ses = crypto_get_driver_session(cses);
ses->ses_fpu_ctx = fpu_kern_alloc_ctx(FPU_KERN_NORMAL);
- error = padlock_cipher_setup(ses, encini);
+ error = padlock_cipher_setup(ses, csp);
if (error != 0) {
padlock_freesession_one(sc, ses);
return (error);
}
- if (macini != NULL) {
+ if (csp->csp_mode == CSP_MODE_ETA) {
td = curthread;
fpu_kern_enter(td, ses->ses_fpu_ctx, FPU_KERN_NORMAL |
FPU_KERN_KTHR);
- error = padlock_hash_setup(ses, macini);
+ error = padlock_hash_setup(ses, csp);
fpu_kern_leave(td, ses->ses_fpu_ctx);
if (error != 0) {
padlock_freesession_one(sc, ses);
@@ -231,68 +228,34 @@ padlock_freesession(device_t dev, crypto_session_t cses)
static int
padlock_process(device_t dev, struct cryptop *crp, int hint __unused)
{
- struct padlock_session *ses = NULL;
- struct cryptodesc *crd, *enccrd, *maccrd;
- int error = 0;
-
- enccrd = maccrd = NULL;
-
- /* Sanity check. */
- if (crp == NULL)
- return (EINVAL);
-
- if (crp->crp_callback == NULL || crp->crp_desc == NULL) {
- error = EINVAL;
- goto out;
- }
+ const struct crypto_session_params *csp;
+ struct padlock_session *ses;
+ int error;
- for (crd = crp->crp_desc; crd != NULL; crd = crd->crd_next) {
- switch (crd->crd_alg) {
- case CRYPTO_NULL_HMAC:
- case CRYPTO_MD5_HMAC:
- case CRYPTO_SHA1_HMAC:
- case CRYPTO_RIPEMD160_HMAC:
- case CRYPTO_SHA2_256_HMAC:
- case CRYPTO_SHA2_384_HMAC:
- case CRYPTO_SHA2_512_HMAC:
- if (maccrd != NULL) {
- error = EINVAL;
- goto out;
- }
- maccrd = crd;
- break;
- case CRYPTO_AES_CBC:
- if (enccrd != NULL) {
- error = EINVAL;
- goto out;
- }
- enccrd = crd;
- break;
- default:
- return (EINVAL);
- }
- }
- if (enccrd == NULL || (enccrd->crd_len % AES_BLOCK_LEN) != 0) {
+ if ((crp->crp_payload_length % AES_BLOCK_LEN) != 0) {
error = EINVAL;
goto out;
}
ses = crypto_get_driver_session(crp->crp_session);
+ csp = crypto_get_params(crp->crp_session);
- /* Perform data authentication if requested before encryption. */
- if (maccrd != NULL && maccrd->crd_next == enccrd) {
- error = padlock_hash_process(ses, maccrd, crp);
+ /* Perform data authentication if requested before decryption. */
+ if (csp->csp_mode == CSP_MODE_ETA &&
+ !CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) {
+ error = padlock_hash_process(ses, crp, csp);
if (error != 0)
goto out;
}
- error = padlock_cipher_process(ses, enccrd, crp);
+ error = padlock_cipher_process(ses, crp, csp);
if (error != 0)
goto out;
/* Perform data authentication if requested after encryption. */
- if (maccrd != NULL && enccrd->crd_next == maccrd) {
- error = padlock_hash_process(ses, maccrd, crp);
+ if (csp->csp_mode == CSP_MODE_ETA &&
+ CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) {
+ error = padlock_hash_process(ses, crp, csp);
if (error != 0)
goto out;
}
@@ -320,6 +283,7 @@ static device_method_t padlock_methods[] = {
DEVMETHOD(device_attach, padlock_attach),
DEVMETHOD(device_detach, padlock_detach),
+ DEVMETHOD(cryptodev_probesession, padlock_probesession),
DEVMETHOD(cryptodev_newsession, padlock_newsession),
DEVMETHOD(cryptodev_freesession,padlock_freesession),
DEVMETHOD(cryptodev_process, padlock_process),
diff --git a/sys/crypto/via/padlock.h b/sys/crypto/via/padlock.h
index 3b75238b98a3..9e0d28abf3bc 100644
--- a/sys/crypto/via/padlock.h
+++ b/sys/crypto/via/padlock.h
@@ -68,7 +68,6 @@ struct padlock_session {
union padlock_cw ses_cw __aligned(16);
uint32_t ses_ekey[4 * (RIJNDAEL_MAXNR + 1) + 4] __aligned(16); /* 128 bit aligned */
uint32_t ses_dkey[4 * (RIJNDAEL_MAXNR + 1) + 4] __aligned(16); /* 128 bit aligned */
- uint8_t ses_iv[16] __aligned(16); /* 128 bit aligned */
struct auth_hash *ses_axf;
uint8_t *ses_ictx;
uint8_t *ses_octx;
@@ -79,13 +78,14 @@ struct padlock_session {
#define PADLOCK_ALIGN(p) (void *)(roundup2((uintptr_t)(p), 16))
int padlock_cipher_setup(struct padlock_session *ses,
- struct cryptoini *encini);
+ const struct crypto_session_params *csp);
int padlock_cipher_process(struct padlock_session *ses,
- struct cryptodesc *enccrd, struct cryptop *crp);
+ struct cryptop *crp, const struct crypto_session_params *csp);
+bool padlock_hash_check(const struct crypto_session_params *csp);
int padlock_hash_setup(struct padlock_session *ses,
- struct cryptoini *macini);
+ const struct crypto_session_params *csp);
int padlock_hash_process(struct padlock_session *ses,
- struct cryptodesc *maccrd, struct cryptop *crp);
+ struct cryptop *crp, const struct crypto_session_params *csp);
void padlock_hash_free(struct padlock_session *ses);
#endif /* !_PADLOCK_H_ */
diff --git a/sys/crypto/via/padlock_cipher.c b/sys/crypto/via/padlock_cipher.c
index 70d28d30fece..04cd0fbb575e 100644
--- a/sys/crypto/via/padlock_cipher.c
+++ b/sys/crypto/via/padlock_cipher.c
@@ -98,7 +98,7 @@ padlock_cbc(void *in, void *out, size_t count, void *key, union padlock_cw *cw,
}
static void
-padlock_cipher_key_setup(struct padlock_session *ses, caddr_t key, int klen)
+padlock_cipher_key_setup(struct padlock_session *ses, const void *key, int klen)
{
union padlock_cw *cw;
int i;
@@ -106,8 +106,8 @@ padlock_cipher_key_setup(struct padlock_session *ses, caddr_t key, int klen)
cw = &ses->ses_cw;
if (cw->cw_key_generation == PADLOCK_KEY_GENERATION_SW) {
/* Build expanded keys for both directions */
- rijndaelKeySetupEnc(ses->ses_ekey, key, klen);
- rijndaelKeySetupDec(ses->ses_dkey, key, klen);
+ rijndaelKeySetupEnc(ses->ses_ekey, key, klen * 8);
+ rijndaelKeySetupDec(ses->ses_dkey, key, klen * 8);
for (i = 0; i < 4 * (RIJNDAEL_MAXNR + 1); i++) {
ses->ses_ekey[i] = ntohl(ses->ses_ekey[i]);
ses->ses_dkey[i] = ntohl(ses->ses_dkey[i]);
@@ -119,12 +119,13 @@ padlock_cipher_key_setup(struct padlock_session *ses, caddr_t key, int klen)
}
int
-padlock_cipher_setup(struct padlock_session *ses, struct cryptoini *encini)
+padlock_cipher_setup(struct padlock_session *ses,
+ const struct crypto_session_params *csp)
{
union padlock_cw *cw;
- if (encini->cri_klen != 128 && encini->cri_klen != 192 &&
- encini->cri_klen != 256) {
+ if (csp->csp_cipher_klen != 16 && csp->csp_cipher_klen != 24 &&
+ csp->csp_cipher_klen != 32) {
return (EINVAL);
}
@@ -133,7 +134,7 @@ padlock_cipher_setup(struct padlock_session *ses, struct cryptoini *encini)
cw->cw_algorithm_type = PADLOCK_ALGORITHM_TYPE_AES;
cw->cw_key_generation = PADLOCK_KEY_GENERATION_SW;
cw->cw_intermediate = 0;
- switch (encini->cri_klen) {
+ switch (csp->csp_cipher_klen * 8) {
case 128:
cw->cw_round_count = PADLOCK_ROUND_COUNT_AES128;
cw->cw_key_size = PADLOCK_KEY_SIZE_128;
@@ -151,12 +152,10 @@ padlock_cipher_setup(struct padlock_session *ses, struct cryptoini *encini)
cw->cw_key_size = PADLOCK_KEY_SIZE_256;
break;
}
- if (encini->cri_key != NULL) {
- padlock_cipher_key_setup(ses, encini->cri_key,
- encini->cri_klen);
+ if (csp->csp_cipher_key != NULL) {
+ padlock_cipher_key_setup(ses, csp->csp_cipher_key,
+ csp->csp_cipher_klen);
}
-
- arc4rand(ses->ses_iv, sizeof(ses->ses_iv), 0);
return (0);
}
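The hunk above is a units change: the old `cri_klen` was in bits, while the new `csp_cipher_klen` is in bytes, hence the `* 8` before `rijndaelKeySetupEnc()` and in the `switch`. A small sketch of the byte-length validation and the standard AES round counts it maps to (the round-count values are AES's, not the PadLock control-word encodings):

```c
/*
 * Map an AES key length in bytes (the new csp_cipher_klen convention)
 * to the AES round count; return -1 for lengths the driver rejects
 * with EINVAL, as padlock_cipher_setup() does above.
 */
static int
aes_rounds_for_klen(int klen_bytes)
{
	switch (klen_bytes * 8) {
	case 128:
		return (10);
	case 192:
		return (12);
	case 256:
		return (14);
	default:
		return (-1);
	}
}
```

Keeping the multiply-by-8 at the boundary, as the driver does, avoids sprinkling bit/byte conversions through the rest of the key-setup code.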
@@ -166,56 +165,60 @@ padlock_cipher_setup(struct padlock_session *ses, struct cryptoini *encini)
* If it isn't, new buffer is allocated.
*/
static u_char *
-padlock_cipher_alloc(struct cryptodesc *enccrd, struct cryptop *crp,
- int *allocated)
+padlock_cipher_alloc(struct cryptop *crp, int *allocated)
{
u_char *addr;
- if (crp->crp_flags & CRYPTO_F_IMBUF)
- goto alloc;
- else {
- if (crp->crp_flags & CRYPTO_F_IOV) {
- struct uio *uio;
- struct iovec *iov;
+ switch (crp->crp_buf_type) {
+ case CRYPTO_BUF_MBUF:
+ break;
+ case CRYPTO_BUF_UIO: {
+ struct uio *uio;
+ struct iovec *iov;
- uio = (struct uio *)crp->crp_buf;
- if (uio->uio_iovcnt != 1)
- goto alloc;
- iov = uio->uio_iov;
- addr = (u_char *)iov->iov_base + enccrd->crd_skip;
- } else {
- addr = (u_char *)crp->crp_buf;
- }
+ uio = crp->crp_uio;
+ if (uio->uio_iovcnt != 1)
+ break;
+ iov = uio->uio_iov;
+ addr = (u_char *)iov->iov_base + crp->crp_payload_start;
if (((uintptr_t)addr & 0xf) != 0) /* 16 bytes aligned? */
- goto alloc;
+ break;
*allocated = 0;
return (addr);
}
-alloc:
+ case CRYPTO_BUF_CONTIG:
+ addr = (u_char *)crp->crp_buf + crp->crp_payload_start;
+ if (((uintptr_t)addr & 0xf) != 0) /* 16 bytes aligned? */
+ break;
+ *allocated = 0;
+ return (addr);
+ }
+
*allocated = 1;
- addr = malloc(enccrd->crd_len + 16, M_PADLOCK, M_NOWAIT);
+ addr = malloc(crp->crp_payload_length + 16, M_PADLOCK, M_NOWAIT);
return (addr);
}
int
-padlock_cipher_process(struct padlock_session *ses, struct cryptodesc *enccrd,
- struct cryptop *crp)
+padlock_cipher_process(struct padlock_session *ses, struct cryptop *crp,
+ const struct crypto_session_params *csp)
{
union padlock_cw *cw;
struct thread *td;
u_char *buf, *abuf;
uint32_t *key;
+ uint8_t iv[AES_BLOCK_LEN] __aligned(16);
int allocated;
- buf = padlock_cipher_alloc(enccrd, crp, &allocated);
+ buf = padlock_cipher_alloc(crp, &allocated);
if (buf == NULL)
return (ENOMEM);
/* Buffer has to be 16 bytes aligned. */
abuf = PADLOCK_ALIGN(buf);
- if ((enccrd->crd_flags & CRD_F_KEY_EXPLICIT) != 0) {
- padlock_cipher_key_setup(ses, enccrd->crd_key,
- enccrd->crd_klen);
+ if (crp->crp_cipher_key != NULL) {
+ padlock_cipher_key_setup(ses, crp->crp_cipher_key,
+ csp->csp_cipher_klen);
}
cw = &ses->ses_cw;
@@ -223,52 +226,39 @@ padlock_cipher_process(struct padlock_session *ses, struct cryptodesc *enccrd,
cw->cw_filler1 = 0;
cw->cw_filler2 = 0;
cw->cw_filler3 = 0;
- if ((enccrd->crd_flags & CRD_F_ENCRYPT) != 0) {
+
+ if (crp->crp_flags & CRYPTO_F_IV_GENERATE) {
+ arc4rand(iv, AES_BLOCK_LEN, 0);
+ crypto_copyback(crp, crp->crp_iv_start, AES_BLOCK_LEN, iv);
+ } else if (crp->crp_flags & CRYPTO_F_IV_SEPARATE)
+ memcpy(iv, crp->crp_iv, AES_BLOCK_LEN);
+ else
+ crypto_copydata(crp, crp->crp_iv_start, AES_BLOCK_LEN, iv);
+
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) {
cw->cw_direction = PADLOCK_DIRECTION_ENCRYPT;
key = ses->ses_ekey;
- if ((enccrd->crd_flags & CRD_F_IV_EXPLICIT) != 0)
- bcopy(enccrd->crd_iv, ses->ses_iv, AES_BLOCK_LEN);
-
- if ((enccrd->crd_flags & CRD_F_IV_PRESENT) == 0) {
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- enccrd->crd_inject, AES_BLOCK_LEN, ses->ses_iv);
- }
} else {
cw->cw_direction = PADLOCK_DIRECTION_DECRYPT;
key = ses->ses_dkey;
- if ((enccrd->crd_flags & CRD_F_IV_EXPLICIT) != 0)
- bcopy(enccrd->crd_iv, ses->ses_iv, AES_BLOCK_LEN);
- else {
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- enccrd->crd_inject, AES_BLOCK_LEN, ses->ses_iv);
- }
}
if (allocated) {
- crypto_copydata(crp->crp_flags, crp->crp_buf, enccrd->crd_skip,
- enccrd->crd_len, abuf);
+ crypto_copydata(crp, crp->crp_payload_start,
+ crp->crp_payload_length, abuf);
}
td = curthread;
fpu_kern_enter(td, ses->ses_fpu_ctx, FPU_KERN_NORMAL | FPU_KERN_KTHR);
- padlock_cbc(abuf, abuf, enccrd->crd_len / AES_BLOCK_LEN, key, cw,
- ses->ses_iv);
+ padlock_cbc(abuf, abuf, crp->crp_payload_length / AES_BLOCK_LEN, key,
+ cw, iv);
fpu_kern_leave(td, ses->ses_fpu_ctx);
if (allocated) {
- crypto_copyback(crp->crp_flags, crp->crp_buf, enccrd->crd_skip,
- enccrd->crd_len, abuf);
- }
-
- /* copy out last block for use as next session IV */
- if ((enccrd->crd_flags & CRD_F_ENCRYPT) != 0) {
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- enccrd->crd_skip + enccrd->crd_len - AES_BLOCK_LEN,
- AES_BLOCK_LEN, ses->ses_iv);
- }
+ crypto_copyback(crp, crp->crp_payload_start,
+ crp->crp_payload_length, abuf);
- if (allocated) {
- bzero(buf, enccrd->crd_len + 16);
+ explicit_bzero(buf, crp->crp_payload_length + 16);
free(buf, M_PADLOCK);
}
return (0);
diff --git a/sys/crypto/via/padlock_hash.c b/sys/crypto/via/padlock_hash.c
index d6c9940208e2..e28e8122c6d8 100644
--- a/sys/crypto/via/padlock_hash.c
+++ b/sys/crypto/via/padlock_hash.c
@@ -44,7 +44,6 @@ __FBSDID("$FreeBSD$");
#include <machine/pcb.h>
#include <opencrypto/cryptodev.h>
-#include <opencrypto/cryptosoft.h> /* for hmac_ipad_buffer and hmac_opad_buffer */
#include <opencrypto/xform.h>
#include <crypto/via/padlock.h>
@@ -249,12 +248,11 @@ padlock_free_ctx(struct auth_hash *axf, void *ctx)
}
static void
-padlock_hash_key_setup(struct padlock_session *ses, caddr_t key, int klen)
+padlock_hash_key_setup(struct padlock_session *ses, const uint8_t *key,
+ int klen)
{
struct auth_hash *axf;
- int i;
- klen /= 8;
axf = ses->ses_axf;
/*
@@ -265,32 +263,17 @@ padlock_hash_key_setup(struct padlock_session *ses, caddr_t key, int klen)
padlock_free_ctx(axf, ses->ses_ictx);
padlock_free_ctx(axf, ses->ses_octx);
- for (i = 0; i < klen; i++)
- key[i] ^= HMAC_IPAD_VAL;
-
- axf->Init(ses->ses_ictx);
- axf->Update(ses->ses_ictx, key, klen);
- axf->Update(ses->ses_ictx, hmac_ipad_buffer, axf->blocksize - klen);
-
- for (i = 0; i < klen; i++)
- key[i] ^= (HMAC_IPAD_VAL ^ HMAC_OPAD_VAL);
-
- axf->Init(ses->ses_octx);
- axf->Update(ses->ses_octx, key, klen);
- axf->Update(ses->ses_octx, hmac_opad_buffer, axf->blocksize - klen);
-
- for (i = 0; i < klen; i++)
- key[i] ^= HMAC_OPAD_VAL;
+ hmac_init_ipad(axf, key, klen, ses->ses_ictx);
+ hmac_init_opad(axf, key, klen, ses->ses_octx);
}
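The hunk above replaces three hand-written XOR loops with `hmac_init_ipad()`/`hmac_init_opad()`. Conceptually both helpers do the same thing: XOR the key into a block-sized pad of 0x36 (inner) or 0x5c (outer) before hashing it (RFC 2104). A sketch of just that pad derivation, assuming the key has already been reduced to at most one block:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define HMAC_IPAD_VAL 0x36
#define HMAC_OPAD_VAL 0x5c

/*
 * Derive the block-sized HMAC pad: pad_val repeated across the block,
 * with the key XORed into the leading klen bytes. hmac_init_ipad/opad
 * then feed this block to the hash's Init/Update.
 */
static void
hmac_pad(uint8_t *pad, size_t blocksize, const uint8_t *key, size_t klen,
    uint8_t pad_val)
{
	size_t i;

	memset(pad, pad_val, blocksize);
	for (i = 0; i < klen; i++)
		pad[i] = key[i] ^ pad_val;
}
```

A side benefit of the new helpers, visible in the deleted loops below: the key is no longer XORed in place, so the caller's key buffer can be `const` — which is why `padlock_hash_key_setup()` now takes `const uint8_t *key`.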
/*
* Compute keyed-hash authenticator.
*/
static int
-padlock_authcompute(struct padlock_session *ses, struct cryptodesc *crd,
- caddr_t buf, int flags)
+padlock_authcompute(struct padlock_session *ses, struct cryptop *crp)
{
- u_char hash[HASH_MAX_LEN];
+ u_char hash[HASH_MAX_LEN], hash2[HASH_MAX_LEN];
struct auth_hash *axf;
union authctx ctx;
int error;
@@ -298,7 +281,14 @@ padlock_authcompute(struct padlock_session *ses, struct cryptodesc *crd,
axf = ses->ses_axf;
padlock_copy_ctx(axf, ses->ses_ictx, &ctx);
- error = crypto_apply(flags, buf, crd->crd_skip, crd->crd_len,
+ error = crypto_apply(crp, crp->crp_aad_start, crp->crp_aad_length,
+ (int (*)(void *, void *, unsigned int))axf->Update, (caddr_t)&ctx);
+ if (error != 0) {
+ padlock_free_ctx(axf, &ctx);
+ return (error);
+ }
+ error = crypto_apply(crp, crp->crp_payload_start,
+ crp->crp_payload_length,
(int (*)(void *, void *, unsigned int))axf->Update, (caddr_t)&ctx);
if (error != 0) {
padlock_free_ctx(axf, &ctx);
@@ -310,48 +300,75 @@ padlock_authcompute(struct padlock_session *ses, struct cryptodesc *crd,
axf->Update(&ctx, hash, axf->hashsize);
axf->Final(hash, &ctx);
- /* Inject the authentication data */
- crypto_copyback(flags, buf, crd->crd_inject,
- ses->ses_mlen == 0 ? axf->hashsize : ses->ses_mlen, hash);
+ if (crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) {
+ crypto_copydata(crp, crp->crp_digest_start, ses->ses_mlen,
+ hash2);
+ if (timingsafe_bcmp(hash, hash2, ses->ses_mlen) != 0)
+ return (EBADMSG);
+ } else
+ crypto_copyback(crp, crp->crp_digest_start, ses->ses_mlen,
+ hash);
return (0);
}
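The verify path added above compares the computed digest against the one in the request with `timingsafe_bcmp()`, so a mismatch takes the same time regardless of how many leading bytes agree. A sketch of such a constant-time comparison — this is the idea, not the libc implementation:

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Constant-time byte comparison in the spirit of timingsafe_bcmp():
 * accumulate XOR differences over the whole buffer instead of
 * returning at the first mismatch. Returns 0 if equal, 1 otherwise.
 */
static int
ct_bcmp(const void *a, const void *b, size_t n)
{
	const uint8_t *p = a, *q = b;
	uint8_t diff = 0;
	size_t i;

	for (i = 0; i < n; i++)
		diff |= p[i] ^ q[i];
	return (diff != 0);
}
```

Moving verification into the driver (returning `EBADMSG` on mismatch) instead of always copying the digest back is one of the consumer-visible changes of the new `CRYPTO_OP_VERIFY_DIGEST` flow.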
-int
-padlock_hash_setup(struct padlock_session *ses, struct cryptoini *macini)
+/* Find software structure which describes HMAC algorithm. */
+static struct auth_hash *
+padlock_hash_lookup(int alg)
{
+ struct auth_hash *axf;
- ses->ses_mlen = macini->cri_mlen;
-
- /* Find software structure which describes HMAC algorithm. */
- switch (macini->cri_alg) {
+ switch (alg) {
case CRYPTO_NULL_HMAC:
- ses->ses_axf = &auth_hash_null;
+ axf = &auth_hash_null;
break;
case CRYPTO_MD5_HMAC:
- ses->ses_axf = &auth_hash_hmac_md5;
+ axf = &auth_hash_hmac_md5;
break;
case CRYPTO_SHA1_HMAC:
if ((via_feature_xcrypt & VIA_HAS_SHA) != 0)
- ses->ses_axf = &padlock_hmac_sha1;
+ axf = &padlock_hmac_sha1;
else
- ses->ses_axf = &auth_hash_hmac_sha1;
+ axf = &auth_hash_hmac_sha1;
break;
case CRYPTO_RIPEMD160_HMAC:
- ses->ses_axf = &auth_hash_hmac_ripemd_160;
+ axf = &auth_hash_hmac_ripemd_160;
break;
case CRYPTO_SHA2_256_HMAC:
if ((via_feature_xcrypt & VIA_HAS_SHA) != 0)
- ses->ses_axf = &padlock_hmac_sha256;
+ axf = &padlock_hmac_sha256;
else
- ses->ses_axf = &auth_hash_hmac_sha2_256;
+ axf = &auth_hash_hmac_sha2_256;
break;
case CRYPTO_SHA2_384_HMAC:
- ses->ses_axf = &auth_hash_hmac_sha2_384;
+ axf = &auth_hash_hmac_sha2_384;
break;
case CRYPTO_SHA2_512_HMAC:
- ses->ses_axf = &auth_hash_hmac_sha2_512;
+ axf = &auth_hash_hmac_sha2_512;
+ break;
+ default:
+ axf = NULL;
break;
}
+ return (axf);
+}
+
+bool
+padlock_hash_check(const struct crypto_session_params *csp)
+{
+
+ return (padlock_hash_lookup(csp->csp_auth_alg) != NULL);
+}
+
+int
+padlock_hash_setup(struct padlock_session *ses,
+ const struct crypto_session_params *csp)
+{
+
+ ses->ses_axf = padlock_hash_lookup(csp->csp_auth_alg);
+ if (csp->csp_auth_mlen == 0)
+ ses->ses_mlen = ses->ses_axf->hashsize;
+ else
+ ses->ses_mlen = csp->csp_auth_mlen;
/* Allocate memory for HMAC inner and outer contexts. */
ses->ses_ictx = malloc(ses->ses_axf->ctxsize, M_PADLOCK,
@@ -362,26 +379,27 @@ padlock_hash_setup(struct padlock_session *ses, struct cryptoini *macini)
return (ENOMEM);
/* Setup key if given. */
- if (macini->cri_key != NULL) {
- padlock_hash_key_setup(ses, macini->cri_key,
- macini->cri_klen);
+ if (csp->csp_auth_key != NULL) {
+ padlock_hash_key_setup(ses, csp->csp_auth_key,
+ csp->csp_auth_klen);
}
return (0);
}
int
-padlock_hash_process(struct padlock_session *ses, struct cryptodesc *maccrd,
- struct cryptop *crp)
+padlock_hash_process(struct padlock_session *ses, struct cryptop *crp,
+ const struct crypto_session_params *csp)
{
struct thread *td;
int error;
td = curthread;
fpu_kern_enter(td, ses->ses_fpu_ctx, FPU_KERN_NORMAL | FPU_KERN_KTHR);
- if ((maccrd->crd_flags & CRD_F_KEY_EXPLICIT) != 0)
- padlock_hash_key_setup(ses, maccrd->crd_key, maccrd->crd_klen);
+ if (crp->crp_auth_key != NULL)
+ padlock_hash_key_setup(ses, crp->crp_auth_key,
+ csp->csp_auth_klen);
- error = padlock_authcompute(ses, maccrd, crp->crp_buf, crp->crp_flags);
+ error = padlock_authcompute(ses, crp);
fpu_kern_leave(td, ses->ses_fpu_ctx);
return (error);
}
diff --git a/sys/dev/cesa/cesa.c b/sys/dev/cesa/cesa.c
index 6cbac049bb42..d4e056d2a09b 100644
--- a/sys/dev/cesa/cesa.c
+++ b/sys/dev/cesa/cesa.c
@@ -69,6 +69,7 @@ __FBSDID("$FreeBSD$");
#include <crypto/sha2/sha256.h>
#include <crypto/rijndael/rijndael.h>
#include <opencrypto/cryptodev.h>
+#include <opencrypto/xform.h>
#include "cryptodev_if.h"
#include <arm/mv/mvreg.h>
@@ -80,7 +81,10 @@ static int cesa_attach(device_t);
static int cesa_attach_late(device_t);
static int cesa_detach(device_t);
static void cesa_intr(void *);
-static int cesa_newsession(device_t, crypto_session_t, struct cryptoini *);
+static int cesa_probesession(device_t,
+ const struct crypto_session_params *);
+static int cesa_newsession(device_t, crypto_session_t,
+ const struct crypto_session_params *);
static int cesa_process(device_t, struct cryptop *, int);
static struct resource_spec cesa_res_spec[] = {
@@ -97,6 +101,7 @@ static device_method_t cesa_methods[] = {
DEVMETHOD(device_detach, cesa_detach),
/* Crypto device methods */
+ DEVMETHOD(cryptodev_probesession, cesa_probesession),
DEVMETHOD(cryptodev_newsession, cesa_newsession),
DEVMETHOD(cryptodev_process, cesa_process),
@@ -417,78 +422,68 @@ cesa_append_packet(struct cesa_softc *sc, struct cesa_request *cr,
return (0);
}
-static int
+static void
cesa_set_mkey(struct cesa_session *cs, int alg, const uint8_t *mkey, int mklen)
{
- uint8_t ipad[CESA_MAX_HMAC_BLOCK_LEN];
- uint8_t opad[CESA_MAX_HMAC_BLOCK_LEN];
- SHA1_CTX sha1ctx;
- SHA256_CTX sha256ctx;
- MD5_CTX md5ctx;
+ union authctx auth_ctx;
uint32_t *hout;
uint32_t *hin;
int i;
- memset(ipad, HMAC_IPAD_VAL, CESA_MAX_HMAC_BLOCK_LEN);
- memset(opad, HMAC_OPAD_VAL, CESA_MAX_HMAC_BLOCK_LEN);
- for (i = 0; i < mklen; i++) {
- ipad[i] ^= mkey[i];
- opad[i] ^= mkey[i];
- }
-
hin = (uint32_t *)cs->cs_hiv_in;
hout = (uint32_t *)cs->cs_hiv_out;
switch (alg) {
case CRYPTO_MD5_HMAC:
- MD5Init(&md5ctx);
- MD5Update(&md5ctx, ipad, MD5_BLOCK_LEN);
- memcpy(hin, md5ctx.state, sizeof(md5ctx.state));
- MD5Init(&md5ctx);
- MD5Update(&md5ctx, opad, MD5_BLOCK_LEN);
- memcpy(hout, md5ctx.state, sizeof(md5ctx.state));
+ hmac_init_ipad(&auth_hash_hmac_md5, mkey, mklen, &auth_ctx);
+ memcpy(hin, auth_ctx.md5ctx.state,
+ sizeof(auth_ctx.md5ctx.state));
+ hmac_init_opad(&auth_hash_hmac_md5, mkey, mklen, &auth_ctx);
+ memcpy(hout, auth_ctx.md5ctx.state,
+ sizeof(auth_ctx.md5ctx.state));
break;
case CRYPTO_SHA1_HMAC:
- SHA1Init(&sha1ctx);
- SHA1Update(&sha1ctx, ipad, SHA1_BLOCK_LEN);
- memcpy(hin, sha1ctx.h.b32, sizeof(sha1ctx.h.b32));
- SHA1Init(&sha1ctx);
- SHA1Update(&sha1ctx, opad, SHA1_BLOCK_LEN);
- memcpy(hout, sha1ctx.h.b32, sizeof(sha1ctx.h.b32));
+ hmac_init_ipad(&auth_hash_hmac_sha1, mkey, mklen, &auth_ctx);
+ memcpy(hin, auth_ctx.sha1ctx.h.b32,
+ sizeof(auth_ctx.sha1ctx.h.b32));
+ hmac_init_opad(&auth_hash_hmac_sha1, mkey, mklen, &auth_ctx);
+ memcpy(hout, auth_ctx.sha1ctx.h.b32,
+ sizeof(auth_ctx.sha1ctx.h.b32));
break;
case CRYPTO_SHA2_256_HMAC:
- SHA256_Init(&sha256ctx);
- SHA256_Update(&sha256ctx, ipad, SHA2_256_BLOCK_LEN);
- memcpy(hin, sha256ctx.state, sizeof(sha256ctx.state));
- SHA256_Init(&sha256ctx);
- SHA256_Update(&sha256ctx, opad, SHA2_256_BLOCK_LEN);
- memcpy(hout, sha256ctx.state, sizeof(sha256ctx.state));
+ hmac_init_ipad(&auth_hash_hmac_sha2_256, mkey, mklen,
+ &auth_ctx);
+ memcpy(hin, auth_ctx.sha256ctx.state,
+ sizeof(auth_ctx.sha256ctx.state));
+ hmac_init_opad(&auth_hash_hmac_sha2_256, mkey, mklen,
+ &auth_ctx);
+ memcpy(hout, auth_ctx.sha256ctx.state,
+ sizeof(auth_ctx.sha256ctx.state));
break;
default:
- return (EINVAL);
+ panic("shouldn't get here");
}
for (i = 0; i < CESA_MAX_HASH_LEN / sizeof(uint32_t); i++) {
hin[i] = htobe32(hin[i]);
hout[i] = htobe32(hout[i]);
}
-
- return (0);
}
static int
-cesa_prep_aes_key(struct cesa_session *cs)
+cesa_prep_aes_key(struct cesa_session *cs,
+ const struct crypto_session_params *csp)
{
uint32_t ek[4 * (RIJNDAEL_MAXNR + 1)];
uint32_t *dkey;
int i;
- rijndaelKeySetupEnc(ek, cs->cs_key, cs->cs_klen * 8);
+ rijndaelKeySetupEnc(ek, cs->cs_key, csp->csp_cipher_klen * 8);
cs->cs_config &= ~CESA_CSH_AES_KLEN_MASK;
dkey = (uint32_t *)cs->cs_aes_dkey;
- switch (cs->cs_klen) {
+ switch (csp->csp_cipher_klen) {
case 16:
cs->cs_config |= CESA_CSH_AES_KLEN_128;
for (i = 0; i < 4; i++)
@@ -515,22 +510,6 @@ cesa_prep_aes_key(struct cesa_session *cs)
return (0);
}
-static int
-cesa_is_hash(int alg)
-{
-
- switch (alg) {
- case CRYPTO_MD5:
- case CRYPTO_MD5_HMAC:
- case CRYPTO_SHA1:
- case CRYPTO_SHA1_HMAC:
- case CRYPTO_SHA2_256_HMAC:
- return (1);
- default:
- return (0);
- }
-}
-
static void
cesa_start_packet(struct cesa_packet *cp, unsigned int size)
{
@@ -584,6 +563,7 @@ cesa_create_chain_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
unsigned int skip, len;
struct cesa_sa_desc *csd;
struct cesa_request *cr;
+ struct cryptop *crp;
struct cesa_softc *sc;
struct cesa_packet cp;
bus_dma_segment_t seg;
@@ -593,73 +573,107 @@ cesa_create_chain_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
cci = arg;
sc = cci->cci_sc;
cr = cci->cci_cr;
+ crp = cr->cr_crp;
if (error) {
cci->cci_error = error;
return;
}
- elen = cci->cci_enc ? cci->cci_enc->crd_len : 0;
- eskip = cci->cci_enc ? cci->cci_enc->crd_skip : 0;
- mlen = cci->cci_mac ? cci->cci_mac->crd_len : 0;
- mskip = cci->cci_mac ? cci->cci_mac->crd_skip : 0;
-
- if (elen && mlen &&
- ((eskip > mskip && ((eskip - mskip) & (cr->cr_cs->cs_ivlen - 1))) ||
- (mskip > eskip && ((mskip - eskip) & (cr->cr_cs->cs_mblen - 1))) ||
- (eskip > (mskip + mlen)) || (mskip > (eskip + elen)))) {
+ /*
+ * Only do a combined op if the AAD is adjacent to the payload
+ * and the AAD length is a multiple of the IV length. The
+ * checks against 'config' are to avoid recursing when the
+ * logic below invokes separate operations.
+ */
+ config = cci->cci_config;
+ if (((config & CESA_CSHD_OP_MASK) == CESA_CSHD_MAC_AND_ENC ||
+ (config & CESA_CSHD_OP_MASK) == CESA_CSHD_ENC_AND_MAC) &&
+ crp->crp_aad_length != 0 &&
+ (crp->crp_aad_length & (cr->cr_cs->cs_ivlen - 1)) != 0) {
/*
 * Data alignment in the request does not meet CESA requirements
* for combined encryption/decryption and hashing. We have to
* split the request to separate operations and process them
* one by one.
*/
- config = cci->cci_config;
if ((config & CESA_CSHD_OP_MASK) == CESA_CSHD_MAC_AND_ENC) {
config &= ~CESA_CSHD_OP_MASK;
cci->cci_config = config | CESA_CSHD_MAC;
- cci->cci_enc = NULL;
- cci->cci_mac = cr->cr_mac;
- cesa_create_chain_cb(cci, segs, nseg, cci->cci_error);
+ cesa_create_chain_cb(cci, segs, nseg, 0);
cci->cci_config = config | CESA_CSHD_ENC;
- cci->cci_enc = cr->cr_enc;
- cci->cci_mac = NULL;
- cesa_create_chain_cb(cci, segs, nseg, cci->cci_error);
+ cesa_create_chain_cb(cci, segs, nseg, 0);
} else {
config &= ~CESA_CSHD_OP_MASK;
cci->cci_config = config | CESA_CSHD_ENC;
- cci->cci_enc = cr->cr_enc;
- cci->cci_mac = NULL;
- cesa_create_chain_cb(cci, segs, nseg, cci->cci_error);
+ cesa_create_chain_cb(cci, segs, nseg, 0);
cci->cci_config = config | CESA_CSHD_MAC;
- cci->cci_enc = NULL;
- cci->cci_mac = cr->cr_mac;
- cesa_create_chain_cb(cci, segs, nseg, cci->cci_error);
+ cesa_create_chain_cb(cci, segs, nseg, 0);
}
return;
}
+ mskip = mlen = eskip = elen = 0;
+
+ if (crp->crp_aad_length == 0) {
+ skip = crp->crp_payload_start;
+ len = crp->crp_payload_length;
+ switch (config & CESA_CSHD_OP_MASK) {
+ case CESA_CSHD_ENC:
+ eskip = skip;
+ elen = len;
+ break;
+ case CESA_CSHD_MAC:
+ mskip = skip;
+ mlen = len;
+ break;
+ default:
+ eskip = skip;
+ elen = len;
+ mskip = skip;
+ mlen = len;
+ break;
+ }
+ } else {
+ /*
+ * For an encryption-only separate request, only
+ * process the payload. For combined requests and
+ * hash-only requests, process the entire region.
+ */
+ switch (config & CESA_CSHD_OP_MASK) {
+ case CESA_CSHD_ENC:
+ skip = crp->crp_payload_start;
+ len = crp->crp_payload_length;
+ eskip = skip;
+ elen = len;
+ break;
+ case CESA_CSHD_MAC:
+ skip = crp->crp_aad_start;
+ len = crp->crp_aad_length + crp->crp_payload_length;
+ mskip = skip;
+ mlen = len;
+ break;
+ default:
+ skip = crp->crp_aad_start;
+ len = crp->crp_aad_length + crp->crp_payload_length;
+ mskip = skip;
+ mlen = len;
+ eskip = crp->crp_payload_start;
+ elen = crp->crp_payload_length;
+ break;
+ }
+ }
+
tmlen = mlen;
fragmented = 0;
mpsize = CESA_MAX_PACKET_SIZE;
mpsize &= ~((cr->cr_cs->cs_ivlen - 1) | (cr->cr_cs->cs_mblen - 1));
- if (elen && mlen) {
- skip = MIN(eskip, mskip);
- len = MAX(elen + eskip, mlen + mskip) - skip;
- } else if (elen) {
- skip = eskip;
- len = elen;
- } else {
- skip = mskip;
- len = mlen;
- }
-
/* Start first packet in chain */
cesa_start_packet(&cp, MIN(mpsize, len));
@@ -777,16 +791,9 @@ cesa_create_chain_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
}
}
-static void
-cesa_create_chain_cb2(void *arg, bus_dma_segment_t *segs, int nseg,
- bus_size_t size, int error)
-{
-
- cesa_create_chain_cb(arg, segs, nseg, error);
-}
-
static int
-cesa_create_chain(struct cesa_softc *sc, struct cesa_request *cr)
+cesa_create_chain(struct cesa_softc *sc,
+ const struct crypto_session_params *csp, struct cesa_request *cr)
{
struct cesa_chain_info cci;
struct cesa_tdma_desc *ctd;
@@ -797,17 +804,17 @@ cesa_create_chain(struct cesa_softc *sc, struct cesa_request *cr)
CESA_LOCK_ASSERT(sc, sessions);
/* Create request metadata */
- if (cr->cr_enc) {
- if (cr->cr_enc->crd_alg == CRYPTO_AES_CBC &&
- (cr->cr_enc->crd_flags & CRD_F_ENCRYPT) == 0)
+ if (csp->csp_cipher_klen != 0) {
+ if (csp->csp_cipher_alg == CRYPTO_AES_CBC &&
+ !CRYPTO_OP_IS_ENCRYPT(cr->cr_crp->crp_op))
memcpy(cr->cr_csd->csd_key, cr->cr_cs->cs_aes_dkey,
- cr->cr_cs->cs_klen);
+ csp->csp_cipher_klen);
else
memcpy(cr->cr_csd->csd_key, cr->cr_cs->cs_key,
- cr->cr_cs->cs_klen);
+ csp->csp_cipher_klen);
}
- if (cr->cr_mac) {
+ if (csp->csp_auth_klen != 0) {
memcpy(cr->cr_csd->csd_hiv_in, cr->cr_cs->cs_hiv_in,
CESA_MAX_HASH_LEN);
memcpy(cr->cr_csd->csd_hiv_out, cr->cr_cs->cs_hiv_out,
@@ -823,37 +830,30 @@ cesa_create_chain(struct cesa_softc *sc, struct cesa_request *cr)
/* Prepare SA configuration */
config = cr->cr_cs->cs_config;
- if (cr->cr_enc && (cr->cr_enc->crd_flags & CRD_F_ENCRYPT) == 0)
+ if (csp->csp_cipher_alg != 0 &&
+ !CRYPTO_OP_IS_ENCRYPT(cr->cr_crp->crp_op))
config |= CESA_CSHD_DECRYPT;
- if (cr->cr_enc && !cr->cr_mac)
+ switch (csp->csp_mode) {
+ case CSP_MODE_CIPHER:
config |= CESA_CSHD_ENC;
- if (!cr->cr_enc && cr->cr_mac)
+ break;
+ case CSP_MODE_DIGEST:
config |= CESA_CSHD_MAC;
- if (cr->cr_enc && cr->cr_mac)
+ break;
+ case CSP_MODE_ETA:
config |= (config & CESA_CSHD_DECRYPT) ? CESA_CSHD_MAC_AND_ENC :
CESA_CSHD_ENC_AND_MAC;
+ break;
+ }
/* Create data packets */
cci.cci_sc = sc;
cci.cci_cr = cr;
- cci.cci_enc = cr->cr_enc;
- cci.cci_mac = cr->cr_mac;
cci.cci_config = config;
cci.cci_error = 0;
- if (cr->cr_crp->crp_flags & CRYPTO_F_IOV)
- error = bus_dmamap_load_uio(sc->sc_data_dtag,
- cr->cr_dmap, (struct uio *)cr->cr_crp->crp_buf,
- cesa_create_chain_cb2, &cci, BUS_DMA_NOWAIT);
- else if (cr->cr_crp->crp_flags & CRYPTO_F_IMBUF)
- error = bus_dmamap_load_mbuf(sc->sc_data_dtag,
- cr->cr_dmap, (struct mbuf *)cr->cr_crp->crp_buf,
- cesa_create_chain_cb2, &cci, BUS_DMA_NOWAIT);
- else
- error = bus_dmamap_load(sc->sc_data_dtag,
- cr->cr_dmap, cr->cr_crp->crp_buf,
- cr->cr_crp->crp_ilen, cesa_create_chain_cb, &cci,
- BUS_DMA_NOWAIT);
+ error = bus_dmamap_load_crp(sc->sc_data_dtag, cr->cr_dmap, cr->cr_crp,
+ cesa_create_chain_cb, &cci, BUS_DMA_NOWAIT);
if (!error)
cr->cr_dmap_loaded = 1;
@@ -1385,18 +1385,6 @@ cesa_attach_late(device_t dev)
goto err8;
}
- crypto_register(sc->sc_cid, CRYPTO_AES_CBC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_DES_CBC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_3DES_CBC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_MD5, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_MD5_HMAC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_SHA1, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_SHA1_HMAC, 0, 0);
- if (sc->sc_soc_id == MV_DEV_88F6828 ||
- sc->sc_soc_id == MV_DEV_88F6820 ||
- sc->sc_soc_id == MV_DEV_88F6810)
- crypto_register(sc->sc_cid, CRYPTO_SHA2_256_HMAC, 0, 0);
-
return (0);
err8:
for (i = 0; i < CESA_REQUESTS; i++)
@@ -1487,6 +1475,7 @@ cesa_intr(void *arg)
struct cesa_request *cr, *tmp;
struct cesa_softc *sc;
uint32_t ecr, icr;
+ uint8_t hash[HASH_MAX_LEN];
int blocked;
sc = arg;
@@ -1547,11 +1536,19 @@ cesa_intr(void *arg)
BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
cr->cr_crp->crp_etype = sc->sc_error;
- if (cr->cr_mac)
- crypto_copyback(cr->cr_crp->crp_flags,
- cr->cr_crp->crp_buf, cr->cr_mac->crd_inject,
- cr->cr_cs->cs_hlen, cr->cr_csd->csd_hash);
-
+ if (cr->cr_cs->cs_hlen != 0 && cr->cr_crp->crp_etype == 0) {
+ if (cr->cr_crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) {
+ crypto_copydata(cr->cr_crp,
+ cr->cr_crp->crp_digest_start,
+ cr->cr_cs->cs_hlen, hash);
+ if (timingsafe_bcmp(hash, cr->cr_csd->csd_hash,
+ cr->cr_cs->cs_hlen) != 0)
+ cr->cr_crp->crp_etype = EBADMSG;
+ } else
+ crypto_copyback(cr->cr_crp,
+ cr->cr_crp->crp_digest_start,
+ cr->cr_cs->cs_hlen, cr->cr_csd->csd_hash);
+ }
crypto_done(cr->cr_crp);
cesa_free_request(sc, cr);
}
@@ -1571,42 +1568,98 @@ cesa_intr(void *arg)
crypto_unblock(sc->sc_cid, blocked);
}
-static int
-cesa_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
+static bool
+cesa_cipher_supported(const struct crypto_session_params *csp)
{
- struct cesa_session *cs;
- struct cesa_softc *sc;
- struct cryptoini *enc;
- struct cryptoini *mac;
- int error;
-
- sc = device_get_softc(dev);
- enc = NULL;
- mac = NULL;
- error = 0;
- /* Check and parse input */
- if (cesa_is_hash(cri->cri_alg))
- mac = cri;
- else
- enc = cri;
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_AES_CBC:
+ if (csp->csp_ivlen != AES_BLOCK_LEN)
+ return (false);
+ break;
+ case CRYPTO_DES_CBC:
+ if (csp->csp_ivlen != DES_BLOCK_LEN)
+ return (false);
+ break;
+ case CRYPTO_3DES_CBC:
+ if (csp->csp_ivlen != DES3_BLOCK_LEN)
+ return (false);
+ break;
+ default:
+ return (false);
+ }
+
+ if (csp->csp_cipher_klen > CESA_MAX_KEY_LEN)
+ return (false);
+
+ return (true);
+}
+
+static bool
+cesa_auth_supported(struct cesa_softc *sc,
+ const struct crypto_session_params *csp)
+{
+
+ switch (csp->csp_auth_alg) {
+ case CRYPTO_SHA2_256_HMAC:
+ if (!(sc->sc_soc_id == MV_DEV_88F6828 ||
+ sc->sc_soc_id == MV_DEV_88F6820 ||
+ sc->sc_soc_id == MV_DEV_88F6810))
+ return (false);
+ /* FALLTHROUGH */
+ case CRYPTO_MD5:
+ case CRYPTO_MD5_HMAC:
+ case CRYPTO_SHA1:
+ case CRYPTO_SHA1_HMAC:
+ break;
+ default:
+ return (false);
+ }
- cri = cri->cri_next;
+ if (csp->csp_auth_klen > CESA_MAX_MKEY_LEN)
+ return (false);
- if (cri) {
- if (!enc && !cesa_is_hash(cri->cri_alg))
- enc = cri;
+ return (true);
+}
- if (!mac && cesa_is_hash(cri->cri_alg))
- mac = cri;
+static int
+cesa_probesession(device_t dev, const struct crypto_session_params *csp)
+{
+ struct cesa_softc *sc;
- if (cri->cri_next || !(enc && mac))
+ sc = device_get_softc(dev);
+ if (csp->csp_flags != 0)
+ return (EINVAL);
+ switch (csp->csp_mode) {
+ case CSP_MODE_DIGEST:
+ if (!cesa_auth_supported(sc, csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_CIPHER:
+ if (!cesa_cipher_supported(csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_ETA:
+ if (!cesa_auth_supported(sc, csp) ||
+ !cesa_cipher_supported(csp))
return (EINVAL);
+ break;
+ default:
+ return (EINVAL);
}
+ return (CRYPTODEV_PROBE_HARDWARE);
+}
- if ((enc && (enc->cri_klen / 8) > CESA_MAX_KEY_LEN) ||
- (mac && (mac->cri_klen / 8) > CESA_MAX_MKEY_LEN))
- return (E2BIG);
+static int
+cesa_newsession(device_t dev, crypto_session_t cses,
+ const struct crypto_session_params *csp)
+{
+ struct cesa_session *cs;
+ struct cesa_softc *sc;
+ int error;
+
+ sc = device_get_softc(dev);
+ error = 0;
/* Allocate session */
cs = crypto_get_driver_session(cses);
@@ -1616,106 +1669,89 @@ cesa_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
cs->cs_ivlen = 1;
cs->cs_mblen = 1;
- if (enc) {
- switch (enc->cri_alg) {
- case CRYPTO_AES_CBC:
- cs->cs_config |= CESA_CSHD_AES | CESA_CSHD_CBC;
- cs->cs_ivlen = AES_BLOCK_LEN;
- break;
- case CRYPTO_DES_CBC:
- cs->cs_config |= CESA_CSHD_DES | CESA_CSHD_CBC;
- cs->cs_ivlen = DES_BLOCK_LEN;
- break;
- case CRYPTO_3DES_CBC:
- cs->cs_config |= CESA_CSHD_3DES | CESA_CSHD_3DES_EDE |
- CESA_CSHD_CBC;
- cs->cs_ivlen = DES3_BLOCK_LEN;
- break;
- default:
- error = EINVAL;
- break;
- }
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_AES_CBC:
+ cs->cs_config |= CESA_CSHD_AES | CESA_CSHD_CBC;
+ cs->cs_ivlen = AES_BLOCK_LEN;
+ break;
+ case CRYPTO_DES_CBC:
+ cs->cs_config |= CESA_CSHD_DES | CESA_CSHD_CBC;
+ cs->cs_ivlen = DES_BLOCK_LEN;
+ break;
+ case CRYPTO_3DES_CBC:
+ cs->cs_config |= CESA_CSHD_3DES | CESA_CSHD_3DES_EDE |
+ CESA_CSHD_CBC;
+ cs->cs_ivlen = DES3_BLOCK_LEN;
+ break;
}
- if (!error && mac) {
- switch (mac->cri_alg) {
- case CRYPTO_MD5:
- cs->cs_mblen = 1;
- cs->cs_hlen = (mac->cri_mlen == 0) ? MD5_HASH_LEN :
- mac->cri_mlen;
- cs->cs_config |= CESA_CSHD_MD5;
- break;
- case CRYPTO_MD5_HMAC:
- cs->cs_mblen = MD5_BLOCK_LEN;
- cs->cs_hlen = (mac->cri_mlen == 0) ? MD5_HASH_LEN :
- mac->cri_mlen;
- cs->cs_config |= CESA_CSHD_MD5_HMAC;
- if (cs->cs_hlen == CESA_HMAC_TRUNC_LEN)
- cs->cs_config |= CESA_CSHD_96_BIT_HMAC;
- break;
- case CRYPTO_SHA1:
- cs->cs_mblen = 1;
- cs->cs_hlen = (mac->cri_mlen == 0) ? SHA1_HASH_LEN :
- mac->cri_mlen;
- cs->cs_config |= CESA_CSHD_SHA1;
- break;
- case CRYPTO_SHA1_HMAC:
- cs->cs_mblen = SHA1_BLOCK_LEN;
- cs->cs_hlen = (mac->cri_mlen == 0) ? SHA1_HASH_LEN :
- mac->cri_mlen;
- cs->cs_config |= CESA_CSHD_SHA1_HMAC;
- if (cs->cs_hlen == CESA_HMAC_TRUNC_LEN)
- cs->cs_config |= CESA_CSHD_96_BIT_HMAC;
- break;
- case CRYPTO_SHA2_256_HMAC:
- cs->cs_mblen = SHA2_256_BLOCK_LEN;
- cs->cs_hlen = (mac->cri_mlen == 0) ? SHA2_256_HASH_LEN :
- mac->cri_mlen;
- cs->cs_config |= CESA_CSHD_SHA2_256_HMAC;
- break;
- default:
- error = EINVAL;
- break;
- }
+ switch (csp->csp_auth_alg) {
+ case CRYPTO_MD5:
+ cs->cs_mblen = 1;
+ cs->cs_hlen = (csp->csp_auth_mlen == 0) ? MD5_HASH_LEN :
+ csp->csp_auth_mlen;
+ cs->cs_config |= CESA_CSHD_MD5;
+ break;
+ case CRYPTO_MD5_HMAC:
+ cs->cs_mblen = MD5_BLOCK_LEN;
+ cs->cs_hlen = (csp->csp_auth_mlen == 0) ? MD5_HASH_LEN :
+ csp->csp_auth_mlen;
+ cs->cs_config |= CESA_CSHD_MD5_HMAC;
+ if (cs->cs_hlen == CESA_HMAC_TRUNC_LEN)
+ cs->cs_config |= CESA_CSHD_96_BIT_HMAC;
+ break;
+ case CRYPTO_SHA1:
+ cs->cs_mblen = 1;
+ cs->cs_hlen = (csp->csp_auth_mlen == 0) ? SHA1_HASH_LEN :
+ csp->csp_auth_mlen;
+ cs->cs_config |= CESA_CSHD_SHA1;
+ break;
+ case CRYPTO_SHA1_HMAC:
+ cs->cs_mblen = SHA1_BLOCK_LEN;
+ cs->cs_hlen = (csp->csp_auth_mlen == 0) ? SHA1_HASH_LEN :
+ csp->csp_auth_mlen;
+ cs->cs_config |= CESA_CSHD_SHA1_HMAC;
+ if (cs->cs_hlen == CESA_HMAC_TRUNC_LEN)
+ cs->cs_config |= CESA_CSHD_96_BIT_HMAC;
+ break;
+ case CRYPTO_SHA2_256_HMAC:
+ cs->cs_mblen = SHA2_256_BLOCK_LEN;
+ cs->cs_hlen = (csp->csp_auth_mlen == 0) ? SHA2_256_HASH_LEN :
+ csp->csp_auth_mlen;
+ cs->cs_config |= CESA_CSHD_SHA2_256_HMAC;
+ break;
}
/* Save cipher key */
- if (!error && enc && enc->cri_key) {
- cs->cs_klen = enc->cri_klen / 8;
- memcpy(cs->cs_key, enc->cri_key, cs->cs_klen);
- if (enc->cri_alg == CRYPTO_AES_CBC)
- error = cesa_prep_aes_key(cs);
+ if (csp->csp_cipher_key != NULL) {
+ memcpy(cs->cs_key, csp->csp_cipher_key,
+ csp->csp_cipher_klen);
+ if (csp->csp_cipher_alg == CRYPTO_AES_CBC)
+ error = cesa_prep_aes_key(cs, csp);
}
/* Save digest key */
- if (!error && mac && mac->cri_key)
- error = cesa_set_mkey(cs, mac->cri_alg, mac->cri_key,
- mac->cri_klen / 8);
+ if (csp->csp_auth_key != NULL)
+ cesa_set_mkey(cs, csp->csp_auth_alg, csp->csp_auth_key,
+ csp->csp_auth_klen);
- if (error)
- return (error);
-
- return (0);
+ return (error);
}
static int
cesa_process(device_t dev, struct cryptop *crp, int hint)
{
+ const struct crypto_session_params *csp;
struct cesa_request *cr;
struct cesa_session *cs;
- struct cryptodesc *crd;
- struct cryptodesc *enc;
- struct cryptodesc *mac;
struct cesa_softc *sc;
int error;
sc = device_get_softc(dev);
- crd = crp->crp_desc;
- enc = NULL;
- mac = NULL;
error = 0;
cs = crypto_get_driver_session(crp->crp_session);
+ csp = crypto_get_params(crp->crp_session);
/* Check and parse input */
if (crp->crp_ilen > CESA_MAX_REQUEST_SIZE) {
@@ -1724,25 +1760,16 @@ cesa_process(device_t dev, struct cryptop *crp, int hint)
return (0);
}
- if (cesa_is_hash(crd->crd_alg))
- mac = crd;
- else
- enc = crd;
-
- crd = crd->crd_next;
-
- if (crd) {
- if (!enc && !cesa_is_hash(crd->crd_alg))
- enc = crd;
-
- if (!mac && cesa_is_hash(crd->crd_alg))
- mac = crd;
-
- if (crd->crd_next || !(enc && mac)) {
- crp->crp_etype = EINVAL;
- crypto_done(crp);
- return (0);
- }
+ /*
+ * For requests with AAD, only requests where the AAD is
+ * immediately adjacent to the payload are supported.
+ */
+ if (crp->crp_aad_length != 0 &&
+ (crp->crp_aad_start + crp->crp_aad_length) !=
+ crp->crp_payload_start) {
+ crp->crp_etype = EINVAL;
+ crypto_done(crp);
+ return (0);
}
/*
@@ -1759,51 +1786,37 @@ cesa_process(device_t dev, struct cryptop *crp, int hint)
/* Prepare request */
cr->cr_crp = crp;
- cr->cr_enc = enc;
- cr->cr_mac = mac;
cr->cr_cs = cs;
CESA_LOCK(sc, sessions);
cesa_sync_desc(sc, BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
- if (enc && enc->crd_flags & CRD_F_ENCRYPT) {
- if (enc->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(cr->cr_csd->csd_iv, enc->crd_iv, cs->cs_ivlen);
- else
- arc4rand(cr->cr_csd->csd_iv, cs->cs_ivlen, 0);
-
- if ((enc->crd_flags & CRD_F_IV_PRESENT) == 0)
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- enc->crd_inject, cs->cs_ivlen, cr->cr_csd->csd_iv);
- } else if (enc) {
- if (enc->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(cr->cr_csd->csd_iv, enc->crd_iv, cs->cs_ivlen);
+ if (csp->csp_cipher_alg != 0) {
+ if (crp->crp_flags & CRYPTO_F_IV_GENERATE) {
+ arc4rand(cr->cr_csd->csd_iv, csp->csp_ivlen, 0);
+ crypto_copyback(crp, crp->crp_iv_start, csp->csp_ivlen,
+ cr->cr_csd->csd_iv);
+ } else if (crp->crp_flags & CRYPTO_F_IV_SEPARATE)
+ memcpy(cr->cr_csd->csd_iv, crp->crp_iv, csp->csp_ivlen);
else
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- enc->crd_inject, cs->cs_ivlen, cr->cr_csd->csd_iv);
+ crypto_copydata(crp, crp->crp_iv_start, csp->csp_ivlen,
+ cr->cr_csd->csd_iv);
}
- if (enc && enc->crd_flags & CRD_F_KEY_EXPLICIT) {
- if ((enc->crd_klen / 8) <= CESA_MAX_KEY_LEN) {
- cs->cs_klen = enc->crd_klen / 8;
- memcpy(cs->cs_key, enc->crd_key, cs->cs_klen);
- if (enc->crd_alg == CRYPTO_AES_CBC)
- error = cesa_prep_aes_key(cs);
- } else
- error = E2BIG;
+ if (crp->crp_cipher_key != NULL) {
+ memcpy(cs->cs_key, crp->crp_cipher_key,
+ csp->csp_cipher_klen);
+ if (csp->csp_cipher_alg == CRYPTO_AES_CBC)
+ error = cesa_prep_aes_key(cs, csp);
}
- if (!error && mac && mac->crd_flags & CRD_F_KEY_EXPLICIT) {
- if ((mac->crd_klen / 8) <= CESA_MAX_MKEY_LEN)
- error = cesa_set_mkey(cs, mac->crd_alg, mac->crd_key,
- mac->crd_klen / 8);
- else
- error = E2BIG;
- }
+ if (!error && crp->crp_auth_key != NULL)
+ cesa_set_mkey(cs, csp->csp_auth_alg, crp->crp_auth_key,
+ csp->csp_auth_klen);
/* Convert request to chain of TDMA and SA descriptors */
if (!error)
- error = cesa_create_chain(sc, cr);
+ error = cesa_create_chain(sc, csp, cr);
cesa_sync_desc(sc, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);
CESA_UNLOCK(sc, sessions);
diff --git a/sys/dev/cesa/cesa.h b/sys/dev/cesa/cesa.h
index 9fa35b89b18f..449f4ecce5b2 100644
--- a/sys/dev/cesa/cesa.h
+++ b/sys/dev/cesa/cesa.h
@@ -194,7 +194,6 @@ struct cesa_sa_desc {
struct cesa_session {
uint32_t cs_config;
- unsigned int cs_klen;
unsigned int cs_ivlen;
unsigned int cs_hlen;
unsigned int cs_mblen;
@@ -208,8 +207,6 @@ struct cesa_request {
struct cesa_sa_data *cr_csd;
bus_addr_t cr_csd_paddr;
struct cryptop *cr_crp;
- struct cryptodesc *cr_enc;
- struct cryptodesc *cr_mac;
struct cesa_session *cr_cs;
bus_dmamap_t cr_dmap;
int cr_dmap_loaded;
@@ -272,8 +269,6 @@ struct cesa_softc {
struct cesa_chain_info {
struct cesa_softc *cci_sc;
struct cesa_request *cci_cr;
- struct cryptodesc *cci_enc;
- struct cryptodesc *cci_mac;
uint32_t cci_config;
int cci_error;
};
diff --git a/sys/dev/cxgbe/adapter.h b/sys/dev/cxgbe/adapter.h
index fd5487b01179..1f3ccedca9b3 100644
--- a/sys/dev/cxgbe/adapter.h
+++ b/sys/dev/cxgbe/adapter.h
@@ -1204,7 +1204,7 @@ union authctx;
void t4_aes_getdeckey(void *, const void *, unsigned int);
void t4_copy_partial_hash(int, union authctx *, void *);
void t4_init_gmac_hash(const char *, int, char *);
-void t4_init_hmac_digest(struct auth_hash *, u_int, char *, int, char *);
+void t4_init_hmac_digest(struct auth_hash *, u_int, const char *, int, char *);
#ifdef DEV_NETMAP
/* t4_netmap.c */
diff --git a/sys/dev/cxgbe/crypto/t4_crypto.c b/sys/dev/cxgbe/crypto/t4_crypto.c
index 5b924125b6e4..5ab8048ece4d 100644
--- a/sys/dev/cxgbe/crypto/t4_crypto.c
+++ b/sys/dev/cxgbe/crypto/t4_crypto.c
@@ -165,7 +165,7 @@ struct ccr_session_blkcipher {
struct ccr_session {
bool active;
int pending;
- enum { HASH, HMAC, BLKCIPHER, AUTHENC, GCM, CCM } mode;
+ enum { HASH, HMAC, BLKCIPHER, ETA, GCM, CCM } mode;
union {
struct ccr_session_hmac hmac;
struct ccr_session_gmac gmac;
@@ -208,8 +208,8 @@ struct ccr_softc {
uint64_t stats_blkcipher_decrypt;
uint64_t stats_hash;
uint64_t stats_hmac;
- uint64_t stats_authenc_encrypt;
- uint64_t stats_authenc_decrypt;
+ uint64_t stats_eta_encrypt;
+ uint64_t stats_eta_decrypt;
uint64_t stats_gcm_encrypt;
uint64_t stats_gcm_decrypt;
uint64_t stats_ccm_encrypt;
@@ -230,9 +230,9 @@ struct ccr_softc {
* Non-hash-only requests require a PHYS_DSGL that describes the
* location to store the results of the encryption or decryption
* operation. This SGL uses a different format (PHYS_DSGL) and should
- * exclude the crd_skip bytes at the start of the data as well as
- * any AAD or IV. For authenticated encryption requests it should
- * cover include the destination of the hash or tag.
+ * exclude the skip bytes at the start of the data as well as any AAD
+ * or IV. For authenticated encryption requests it should include the
+ * destination of the hash or tag.
*
* The input payload may either be supplied inline as immediate data,
* or via a standard ULP_TX SGL. This SGL should include AAD,
@@ -251,12 +251,19 @@ ccr_populate_sglist(struct sglist *sg, struct cryptop *crp)
int error;
sglist_reset(sg);
- if (crp->crp_flags & CRYPTO_F_IMBUF)
- error = sglist_append_mbuf(sg, (struct mbuf *)crp->crp_buf);
- else if (crp->crp_flags & CRYPTO_F_IOV)
- error = sglist_append_uio(sg, (struct uio *)crp->crp_buf);
- else
+ switch (crp->crp_buf_type) {
+ case CRYPTO_BUF_MBUF:
+ error = sglist_append_mbuf(sg, crp->crp_mbuf);
+ break;
+ case CRYPTO_BUF_UIO:
+ error = sglist_append_uio(sg, crp->crp_uio);
+ break;
+ case CRYPTO_BUF_CONTIG:
error = sglist_append(sg, crp->crp_buf, crp->crp_ilen);
+ break;
+ default:
+ error = EINVAL;
+ }
return (error);
}
@@ -436,16 +443,13 @@ ccr_hash(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp)
struct chcr_wr *crwr;
struct wrqe *wr;
struct auth_hash *axf;
- struct cryptodesc *crd;
char *dst;
u_int hash_size_in_response, kctx_flits, kctx_len, transhdr_len, wr_len;
u_int hmac_ctrl, imm_len, iopad_size;
int error, sgl_nsegs, sgl_len, use_opad;
- crd = crp->crp_desc;
-
/* Reject requests with too large of an input buffer. */
- if (crd->crd_len > MAX_REQUEST_SIZE)
+ if (crp->crp_payload_length > MAX_REQUEST_SIZE)
return (EFBIG);
axf = s->hmac.auth_hash;
@@ -471,19 +475,19 @@ ccr_hash(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp)
hash_size_in_response = axf->hashsize;
transhdr_len = HASH_TRANSHDR_SIZE(kctx_len);
- if (crd->crd_len == 0) {
+ if (crp->crp_payload_length == 0) {
imm_len = axf->blocksize;
sgl_nsegs = 0;
sgl_len = 0;
- } else if (ccr_use_imm_data(transhdr_len, crd->crd_len)) {
- imm_len = crd->crd_len;
+ } else if (ccr_use_imm_data(transhdr_len, crp->crp_payload_length)) {
+ imm_len = crp->crp_payload_length;
sgl_nsegs = 0;
sgl_len = 0;
} else {
imm_len = 0;
sglist_reset(sc->sg_ulptx);
error = sglist_append_sglist(sc->sg_ulptx, sc->sg_crp,
- crd->crd_skip, crd->crd_len);
+ crp->crp_payload_start, crp->crp_payload_length);
if (error)
return (error);
sgl_nsegs = sc->sg_ulptx->sg_nseg;
@@ -512,8 +516,8 @@ ccr_hash(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp)
V_CPL_TX_SEC_PDU_CPLLEN(2) | V_CPL_TX_SEC_PDU_PLACEHOLDER(0) |
V_CPL_TX_SEC_PDU_IVINSRTOFST(0));
- crwr->sec_cpl.pldlen = htobe32(crd->crd_len == 0 ? axf->blocksize :
- crd->crd_len);
+ crwr->sec_cpl.pldlen = htobe32(crp->crp_payload_length == 0 ?
+ axf->blocksize : crp->crp_payload_length);
crwr->sec_cpl.cipherstop_lo_authinsert = htobe32(
V_CPL_TX_SEC_PDU_AUTHSTART(1) | V_CPL_TX_SEC_PDU_AUTHSTOP(0));
@@ -527,7 +531,8 @@ ccr_hash(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp)
V_SCMD_HMAC_CTRL(hmac_ctrl));
crwr->sec_cpl.ivgen_hdrlen = htobe32(
V_SCMD_LAST_FRAG(0) |
- V_SCMD_MORE_FRAGS(crd->crd_len == 0 ? 1 : 0) | V_SCMD_MAC_ONLY(1));
+ V_SCMD_MORE_FRAGS(crp->crp_payload_length == 0 ? 1 : 0) |
+ V_SCMD_MAC_ONLY(1));
memcpy(crwr->key_ctx.key, s->hmac.pads, kctx_len);
@@ -540,14 +545,14 @@ ccr_hash(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp)
V_KEY_CONTEXT_MK_SIZE(s->hmac.mk_size) | V_KEY_CONTEXT_VALID(1));
dst = (char *)(crwr + 1) + kctx_len + DUMMY_BYTES;
- if (crd->crd_len == 0) {
+ if (crp->crp_payload_length == 0) {
dst[0] = 0x80;
if (s->mode == HMAC)
*(uint64_t *)(dst + axf->blocksize - sizeof(uint64_t)) =
htobe64(axf->blocksize << 3);
} else if (imm_len != 0)
- crypto_copydata(crp->crp_flags, crp->crp_buf, crd->crd_skip,
- crd->crd_len, dst);
+ crypto_copydata(crp, crp->crp_payload_start,
+ crp->crp_payload_length, dst);
else
ccr_write_ulptx_sgl(sc, dst, sgl_nsegs);
@@ -561,15 +566,20 @@ static int
ccr_hash_done(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
const struct cpl_fw6_pld *cpl, int error)
{
- struct cryptodesc *crd;
+ uint8_t hash[HASH_MAX_LEN];
- crd = crp->crp_desc;
- if (error == 0) {
- crypto_copyback(crp->crp_flags, crp->crp_buf, crd->crd_inject,
- s->hmac.hash_len, (c_caddr_t)(cpl + 1));
- }
+ if (error)
+ return (error);
- return (error);
+ if (crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) {
+ crypto_copydata(crp, crp->crp_digest_start, s->hmac.hash_len,
+ hash);
+ if (timingsafe_bcmp((cpl + 1), hash, s->hmac.hash_len) != 0)
+ return (EBADMSG);
+ } else
+ crypto_copyback(crp, crp->crp_digest_start, s->hmac.hash_len,
+ (cpl + 1));
+ return (0);
}
static int
@@ -578,34 +588,31 @@ ccr_blkcipher(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp)
char iv[CHCR_MAX_CRYPTO_IV_LEN];
struct chcr_wr *crwr;
struct wrqe *wr;
- struct cryptodesc *crd;
char *dst;
u_int kctx_len, key_half, op_type, transhdr_len, wr_len;
- u_int imm_len;
+ u_int imm_len, iv_len;
int dsgl_nsegs, dsgl_len;
int sgl_nsegs, sgl_len;
int error;
- crd = crp->crp_desc;
-
- if (s->blkcipher.key_len == 0 || crd->crd_len == 0)
+ if (s->blkcipher.key_len == 0 || crp->crp_payload_length == 0)
return (EINVAL);
- if (crd->crd_alg == CRYPTO_AES_CBC &&
- (crd->crd_len % AES_BLOCK_LEN) != 0)
+ if (s->blkcipher.cipher_mode == SCMD_CIPH_MODE_AES_CBC &&
+ (crp->crp_payload_length % AES_BLOCK_LEN) != 0)
return (EINVAL);
/* Reject requests with too large of an input buffer. */
- if (crd->crd_len > MAX_REQUEST_SIZE)
+ if (crp->crp_payload_length > MAX_REQUEST_SIZE)
return (EFBIG);
- if (crd->crd_flags & CRD_F_ENCRYPT)
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
op_type = CHCR_ENCRYPT_OP;
else
op_type = CHCR_DECRYPT_OP;
sglist_reset(sc->sg_dsgl);
- error = sglist_append_sglist(sc->sg_dsgl, sc->sg_crp, crd->crd_skip,
- crd->crd_len);
+ error = sglist_append_sglist(sc->sg_dsgl, sc->sg_crp,
+ crp->crp_payload_start, crp->crp_payload_length);
if (error)
return (error);
dsgl_nsegs = ccr_count_sgl(sc->sg_dsgl, DSGL_SGE_MAXLEN);
@@ -617,23 +624,28 @@ ccr_blkcipher(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp)
kctx_len = roundup2(s->blkcipher.key_len, 16);
transhdr_len = CIPHER_TRANSHDR_SIZE(kctx_len, dsgl_len);
- if (ccr_use_imm_data(transhdr_len, crd->crd_len +
- s->blkcipher.iv_len)) {
- imm_len = crd->crd_len;
+ /* For AES-XTS we send a 16-byte IV in the work request. */
+ if (s->blkcipher.cipher_mode == SCMD_CIPH_MODE_AES_XTS)
+ iv_len = AES_BLOCK_LEN;
+ else
+ iv_len = s->blkcipher.iv_len;
+
+ if (ccr_use_imm_data(transhdr_len, crp->crp_payload_length + iv_len)) {
+ imm_len = crp->crp_payload_length;
sgl_nsegs = 0;
sgl_len = 0;
} else {
imm_len = 0;
sglist_reset(sc->sg_ulptx);
error = sglist_append_sglist(sc->sg_ulptx, sc->sg_crp,
- crd->crd_skip, crd->crd_len);
+ crp->crp_payload_start, crp->crp_payload_length);
if (error)
return (error);
sgl_nsegs = sc->sg_ulptx->sg_nseg;
sgl_len = ccr_ulptx_sgl_len(sgl_nsegs);
}
- wr_len = roundup2(transhdr_len, 16) + s->blkcipher.iv_len +
+ wr_len = roundup2(transhdr_len, 16) + iv_len +
roundup2(imm_len, 16) + sgl_len;
if (wr_len > SGE_MAX_WR_LEN)
return (EFBIG);
@@ -647,24 +659,20 @@ ccr_blkcipher(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp)
/*
* Read the existing IV from the request or generate a random
- * one if none is provided. Optionally copy the generated IV
- * into the output buffer if requested.
+ * one if none is provided.
*/
- if (op_type == CHCR_ENCRYPT_OP) {
- if (crd->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(iv, crd->crd_iv, s->blkcipher.iv_len);
- else
- arc4rand(iv, s->blkcipher.iv_len, 0);
- if ((crd->crd_flags & CRD_F_IV_PRESENT) == 0)
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- crd->crd_inject, s->blkcipher.iv_len, iv);
- } else {
- if (crd->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(iv, crd->crd_iv, s->blkcipher.iv_len);
- else
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crd->crd_inject, s->blkcipher.iv_len, iv);
- }
+ if (crp->crp_flags & CRYPTO_F_IV_GENERATE) {
+ arc4rand(iv, s->blkcipher.iv_len, 0);
+ crypto_copyback(crp, crp->crp_iv_start, s->blkcipher.iv_len,
+ iv);
+ } else if (crp->crp_flags & CRYPTO_F_IV_SEPARATE)
+ memcpy(iv, crp->crp_iv, s->blkcipher.iv_len);
+ else
+ crypto_copydata(crp, crp->crp_iv_start, s->blkcipher.iv_len,
+ iv);
+
+ /* Zero the remainder of the IV for AES-XTS. */
+ memset(iv + s->blkcipher.iv_len, 0, iv_len - s->blkcipher.iv_len);
ccr_populate_wreq(sc, crwr, kctx_len, wr_len, imm_len, sgl_len, 0,
crp);
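The hunk above collapses the old four-way encrypt/decrypt × explicit/in-buffer IV logic into one flag-driven path, followed by zero-extension for AES-XTS. A sketch of that flow, under illustrative names (the enum and function are not the driver's; the real code calls `arc4rand()` in the generate case):

```c
#include <stdint.h>
#include <string.h>

/* The three places a request IV can come from, independent of the
 * operation direction. */
enum iv_source { IV_GENERATE, IV_SEPARATE, IV_IN_BUFFER };

void
prepare_wr_iv(enum iv_source src, const uint8_t *src_iv, size_t iv_len,
    size_t wr_iv_len, uint8_t *out)
{
	switch (src) {
	case IV_GENERATE:
		/* The driver generates a random IV here and copies it
		 * back into the request; zeros keep this sketch
		 * deterministic. */
		memset(out, 0, iv_len);
		break;
	case IV_SEPARATE:	/* IV passed inline (crp_iv) */
	case IV_IN_BUFFER:	/* IV read from the data buffer */
		memcpy(out, src_iv, iv_len);
		break;
	}
	/* Zero the remainder: AES-XTS work requests always carry a
	 * 16-byte IV, so a shorter session IV (e.g. an 8-byte sector
	 * number) gets a zeroed tail. */
	memset(out + iv_len, 0, wr_iv_len - iv_len);
}
```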
@@ -677,10 +685,10 @@ ccr_blkcipher(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp)
V_CPL_TX_SEC_PDU_CPLLEN(2) | V_CPL_TX_SEC_PDU_PLACEHOLDER(0) |
V_CPL_TX_SEC_PDU_IVINSRTOFST(1));
- crwr->sec_cpl.pldlen = htobe32(s->blkcipher.iv_len + crd->crd_len);
+ crwr->sec_cpl.pldlen = htobe32(iv_len + crp->crp_payload_length);
crwr->sec_cpl.aadstart_cipherstop_hi = htobe32(
- V_CPL_TX_SEC_PDU_CIPHERSTART(s->blkcipher.iv_len + 1) |
+ V_CPL_TX_SEC_PDU_CIPHERSTART(iv_len + 1) |
V_CPL_TX_SEC_PDU_CIPHERSTOP_HI(0));
crwr->sec_cpl.cipherstop_lo_authinsert = htobe32(
V_CPL_TX_SEC_PDU_CIPHERSTOP_LO(0));
@@ -693,7 +701,7 @@ ccr_blkcipher(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp)
V_SCMD_CIPH_MODE(s->blkcipher.cipher_mode) |
V_SCMD_AUTH_MODE(SCMD_AUTH_MODE_NOP) |
V_SCMD_HMAC_CTRL(SCMD_HMAC_CTRL_NOP) |
- V_SCMD_IV_SIZE(s->blkcipher.iv_len / 2) |
+ V_SCMD_IV_SIZE(iv_len / 2) |
V_SCMD_NUM_IVS(0));
crwr->sec_cpl.ivgen_hdrlen = htobe32(
V_SCMD_IV_GEN_CTRL(0) |
@@ -701,24 +709,24 @@ ccr_blkcipher(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp)
V_SCMD_AADIVDROP(1) | V_SCMD_HDR_LEN(dsgl_len));
crwr->key_ctx.ctx_hdr = s->blkcipher.key_ctx_hdr;
- switch (crd->crd_alg) {
- case CRYPTO_AES_CBC:
- if (crd->crd_flags & CRD_F_ENCRYPT)
+ switch (s->blkcipher.cipher_mode) {
+ case SCMD_CIPH_MODE_AES_CBC:
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
memcpy(crwr->key_ctx.key, s->blkcipher.enckey,
s->blkcipher.key_len);
else
memcpy(crwr->key_ctx.key, s->blkcipher.deckey,
s->blkcipher.key_len);
break;
- case CRYPTO_AES_ICM:
+ case SCMD_CIPH_MODE_AES_CTR:
memcpy(crwr->key_ctx.key, s->blkcipher.enckey,
s->blkcipher.key_len);
break;
- case CRYPTO_AES_XTS:
+ case SCMD_CIPH_MODE_AES_XTS:
key_half = s->blkcipher.key_len / 2;
memcpy(crwr->key_ctx.key, s->blkcipher.enckey + key_half,
key_half);
- if (crd->crd_flags & CRD_F_ENCRYPT)
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
memcpy(crwr->key_ctx.key + key_half,
s->blkcipher.enckey, key_half);
else
@@ -730,11 +738,11 @@ ccr_blkcipher(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp)
dst = (char *)(crwr + 1) + kctx_len;
ccr_write_phys_dsgl(sc, dst, dsgl_nsegs);
dst += sizeof(struct cpl_rx_phys_dsgl) + dsgl_len;
- memcpy(dst, iv, s->blkcipher.iv_len);
- dst += s->blkcipher.iv_len;
+ memcpy(dst, iv, iv_len);
+ dst += iv_len;
if (imm_len != 0)
- crypto_copydata(crp->crp_flags, crp->crp_buf, crd->crd_skip,
- crd->crd_len, dst);
+ crypto_copydata(crp, crp->crp_payload_start,
+ crp->crp_payload_length, dst);
else
ccr_write_ulptx_sgl(sc, dst, sgl_nsegs);
@@ -775,8 +783,7 @@ ccr_hmac_ctrl(unsigned int hashsize, unsigned int authsize)
}
static int
-ccr_authenc(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
- struct cryptodesc *crda, struct cryptodesc *crde)
+ccr_eta(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp)
{
char iv[CHCR_MAX_CRYPTO_IV_LEN];
struct chcr_wr *crwr;
@@ -784,9 +791,9 @@ ccr_authenc(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
struct auth_hash *axf;
char *dst;
u_int kctx_len, key_half, op_type, transhdr_len, wr_len;
- u_int hash_size_in_response, imm_len, iopad_size;
- u_int aad_start, aad_len, aad_stop;
- u_int auth_start, auth_stop, auth_insert;
+ u_int hash_size_in_response, imm_len, iopad_size, iv_len;
+ u_int aad_start, aad_stop;
+ u_int auth_insert;
u_int cipher_start, cipher_stop;
u_int hmac_ctrl, input_len;
int dsgl_nsegs, dsgl_len;
@@ -797,34 +804,24 @@ ccr_authenc(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
* If there is a need in the future, requests with an empty
* payload could be supported as HMAC-only requests.
*/
- if (s->blkcipher.key_len == 0 || crde->crd_len == 0)
+ if (s->blkcipher.key_len == 0 || crp->crp_payload_length == 0)
return (EINVAL);
- if (crde->crd_alg == CRYPTO_AES_CBC &&
- (crde->crd_len % AES_BLOCK_LEN) != 0)
+ if (s->blkcipher.cipher_mode == SCMD_CIPH_MODE_AES_CBC &&
+ (crp->crp_payload_length % AES_BLOCK_LEN) != 0)
return (EINVAL);
- /*
- * Compute the length of the AAD (data covered by the
- * authentication descriptor but not the encryption
- * descriptor). To simplify the logic, AAD is only permitted
- * before the cipher/plain text, not after. This is true of
- * all currently-generated requests.
- */
- if (crda->crd_len + crda->crd_skip > crde->crd_len + crde->crd_skip)
- return (EINVAL);
- if (crda->crd_skip < crde->crd_skip) {
- if (crda->crd_skip + crda->crd_len > crde->crd_skip)
- aad_len = (crde->crd_skip - crda->crd_skip);
- else
- aad_len = crda->crd_len;
- } else
- aad_len = 0;
- if (aad_len + s->blkcipher.iv_len > MAX_AAD_LEN)
+ /* For AES-XTS we send a 16-byte IV in the work request. */
+ if (s->blkcipher.cipher_mode == SCMD_CIPH_MODE_AES_XTS)
+ iv_len = AES_BLOCK_LEN;
+ else
+ iv_len = s->blkcipher.iv_len;
+
+ if (crp->crp_aad_length + iv_len > MAX_AAD_LEN)
return (EINVAL);
axf = s->hmac.auth_hash;
hash_size_in_response = s->hmac.hash_len;
- if (crde->crd_flags & CRD_F_ENCRYPT)
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
op_type = CHCR_ENCRYPT_OP;
else
op_type = CHCR_DECRYPT_OP;
@@ -839,26 +836,26 @@ ccr_authenc(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
* output buffer.
*/
if (op_type == CHCR_ENCRYPT_OP) {
- if (s->blkcipher.iv_len + aad_len + crde->crd_len +
+ if (iv_len + crp->crp_aad_length + crp->crp_payload_length +
hash_size_in_response > MAX_REQUEST_SIZE)
return (EFBIG);
} else {
- if (s->blkcipher.iv_len + aad_len + crde->crd_len >
+ if (iv_len + crp->crp_aad_length + crp->crp_payload_length >
MAX_REQUEST_SIZE)
return (EFBIG);
}
sglist_reset(sc->sg_dsgl);
error = sglist_append_sglist(sc->sg_dsgl, sc->sg_iv_aad, 0,
- s->blkcipher.iv_len + aad_len);
+ iv_len + crp->crp_aad_length);
if (error)
return (error);
- error = sglist_append_sglist(sc->sg_dsgl, sc->sg_crp, crde->crd_skip,
- crde->crd_len);
+ error = sglist_append_sglist(sc->sg_dsgl, sc->sg_crp,
+ crp->crp_payload_start, crp->crp_payload_length);
if (error)
return (error);
if (op_type == CHCR_ENCRYPT_OP) {
error = sglist_append_sglist(sc->sg_dsgl, sc->sg_crp,
- crda->crd_inject, hash_size_in_response);
+ crp->crp_digest_start, hash_size_in_response);
if (error)
return (error);
}
@@ -888,7 +885,7 @@ ccr_authenc(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
* inside of the AAD region, so a second copy is always
* required.
*/
- input_len = aad_len + crde->crd_len;
+ input_len = crp->crp_aad_length + crp->crp_payload_length;
/*
* The firmware hangs if sent a request which is a
@@ -902,26 +899,27 @@ ccr_authenc(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
return (EFBIG);
if (op_type == CHCR_DECRYPT_OP)
input_len += hash_size_in_response;
- if (ccr_use_imm_data(transhdr_len, s->blkcipher.iv_len + input_len)) {
+
+ if (ccr_use_imm_data(transhdr_len, iv_len + input_len)) {
imm_len = input_len;
sgl_nsegs = 0;
sgl_len = 0;
} else {
imm_len = 0;
sglist_reset(sc->sg_ulptx);
- if (aad_len != 0) {
+ if (crp->crp_aad_length != 0) {
error = sglist_append_sglist(sc->sg_ulptx, sc->sg_crp,
- crda->crd_skip, aad_len);
+ crp->crp_aad_start, crp->crp_aad_length);
if (error)
return (error);
}
error = sglist_append_sglist(sc->sg_ulptx, sc->sg_crp,
- crde->crd_skip, crde->crd_len);
+ crp->crp_payload_start, crp->crp_payload_length);
if (error)
return (error);
if (op_type == CHCR_DECRYPT_OP) {
error = sglist_append_sglist(sc->sg_ulptx, sc->sg_crp,
- crda->crd_inject, hash_size_in_response);
+ crp->crp_digest_start, hash_size_in_response);
if (error)
return (error);
}
@@ -934,37 +932,25 @@ ccr_authenc(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
* Auth-data that overlaps with the cipher region is placed in
* the auth section.
*/
- if (aad_len != 0) {
- aad_start = s->blkcipher.iv_len + 1;
- aad_stop = aad_start + aad_len - 1;
+ if (crp->crp_aad_length != 0) {
+ aad_start = iv_len + 1;
+ aad_stop = aad_start + crp->crp_aad_length - 1;
} else {
aad_start = 0;
aad_stop = 0;
}
- cipher_start = s->blkcipher.iv_len + aad_len + 1;
+ cipher_start = iv_len + crp->crp_aad_length + 1;
if (op_type == CHCR_DECRYPT_OP)
cipher_stop = hash_size_in_response;
else
cipher_stop = 0;
- if (aad_len == crda->crd_len) {
- auth_start = 0;
- auth_stop = 0;
- } else {
- if (aad_len != 0)
- auth_start = cipher_start;
- else
- auth_start = s->blkcipher.iv_len + crda->crd_skip -
- crde->crd_skip + 1;
- auth_stop = (crde->crd_skip + crde->crd_len) -
- (crda->crd_skip + crda->crd_len) + cipher_stop;
- }
if (op_type == CHCR_DECRYPT_OP)
auth_insert = hash_size_in_response;
else
auth_insert = 0;
- wr_len = roundup2(transhdr_len, 16) + s->blkcipher.iv_len +
- roundup2(imm_len, 16) + sgl_len;
+ wr_len = roundup2(transhdr_len, 16) + iv_len + roundup2(imm_len, 16) +
+ sgl_len;
if (wr_len > SGE_MAX_WR_LEN)
return (EFBIG);
wr = alloc_wrqe(wr_len, sc->txq);
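Dropping the `auth_start`/`auth_stop` computation works because ETA requests now guarantee the AAD sits in front of the payload, so the auth region coincides with the cipher region. A sketch of the remaining offset math, under assumed names (offsets are 1-based within the concatenated IV | AAD | payload input; on decrypt the cipher stops `digest_len` bytes early so the trailing tag is authenticated but not decrypted):

```c
#include <stdint.h>

struct sec_pdu_regions {
	unsigned aad_start, aad_stop;
	unsigned cipher_start, cipher_stop;
};

/* Compute the AAD and cipher byte regions for an ETA work request.
 * Struct and function names are illustrative, not the driver's. */
struct sec_pdu_regions
eta_regions(unsigned iv_len, unsigned aad_len, int decrypt,
    unsigned digest_len)
{
	struct sec_pdu_regions r;

	if (aad_len != 0) {
		r.aad_start = iv_len + 1;	/* first byte after the IV */
		r.aad_stop = r.aad_start + aad_len - 1;
	} else {
		r.aad_start = 0;		/* 0 means "no AAD region" */
		r.aad_stop = 0;
	}
	r.cipher_start = iv_len + aad_len + 1;
	r.cipher_stop = decrypt ? digest_len : 0;
	return (r);
}
```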
@@ -977,24 +963,20 @@ ccr_authenc(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
/*
* Read the existing IV from the request or generate a random
- * one if none is provided. Optionally copy the generated IV
- * into the output buffer if requested.
+ * one if none is provided.
*/
- if (op_type == CHCR_ENCRYPT_OP) {
- if (crde->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(iv, crde->crd_iv, s->blkcipher.iv_len);
- else
- arc4rand(iv, s->blkcipher.iv_len, 0);
- if ((crde->crd_flags & CRD_F_IV_PRESENT) == 0)
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- crde->crd_inject, s->blkcipher.iv_len, iv);
- } else {
- if (crde->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(iv, crde->crd_iv, s->blkcipher.iv_len);
- else
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crde->crd_inject, s->blkcipher.iv_len, iv);
- }
+ if (crp->crp_flags & CRYPTO_F_IV_GENERATE) {
+ arc4rand(iv, s->blkcipher.iv_len, 0);
+ crypto_copyback(crp, crp->crp_iv_start, s->blkcipher.iv_len,
+ iv);
+ } else if (crp->crp_flags & CRYPTO_F_IV_SEPARATE)
+ memcpy(iv, crp->crp_iv, s->blkcipher.iv_len);
+ else
+ crypto_copydata(crp, crp->crp_iv_start, s->blkcipher.iv_len,
+ iv);
+
+ /* Zero the remainder of the IV for AES-XTS. */
+ memset(iv + s->blkcipher.iv_len, 0, iv_len - s->blkcipher.iv_len);
ccr_populate_wreq(sc, crwr, kctx_len, wr_len, imm_len, sgl_len,
op_type == CHCR_DECRYPT_OP ? hash_size_in_response : 0, crp);
@@ -1007,7 +989,7 @@ ccr_authenc(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
V_CPL_TX_SEC_PDU_CPLLEN(2) | V_CPL_TX_SEC_PDU_PLACEHOLDER(0) |
V_CPL_TX_SEC_PDU_IVINSRTOFST(1));
- crwr->sec_cpl.pldlen = htobe32(s->blkcipher.iv_len + input_len);
+ crwr->sec_cpl.pldlen = htobe32(iv_len + input_len);
crwr->sec_cpl.aadstart_cipherstop_hi = htobe32(
V_CPL_TX_SEC_PDU_AADSTART(aad_start) |
@@ -1016,8 +998,8 @@ ccr_authenc(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
V_CPL_TX_SEC_PDU_CIPHERSTOP_HI(cipher_stop >> 4));
crwr->sec_cpl.cipherstop_lo_authinsert = htobe32(
V_CPL_TX_SEC_PDU_CIPHERSTOP_LO(cipher_stop & 0xf) |
- V_CPL_TX_SEC_PDU_AUTHSTART(auth_start) |
- V_CPL_TX_SEC_PDU_AUTHSTOP(auth_stop) |
+ V_CPL_TX_SEC_PDU_AUTHSTART(cipher_start) |
+ V_CPL_TX_SEC_PDU_AUTHSTOP(cipher_stop) |
V_CPL_TX_SEC_PDU_AUTHINSERT(auth_insert));
/* These two flits are actually a CPL_TLS_TX_SCMD_FMT. */
@@ -1030,7 +1012,7 @@ ccr_authenc(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
V_SCMD_CIPH_MODE(s->blkcipher.cipher_mode) |
V_SCMD_AUTH_MODE(s->hmac.auth_mode) |
V_SCMD_HMAC_CTRL(hmac_ctrl) |
- V_SCMD_IV_SIZE(s->blkcipher.iv_len / 2) |
+ V_SCMD_IV_SIZE(iv_len / 2) |
V_SCMD_NUM_IVS(0));
crwr->sec_cpl.ivgen_hdrlen = htobe32(
V_SCMD_IV_GEN_CTRL(0) |
@@ -1038,24 +1020,24 @@ ccr_authenc(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
V_SCMD_AADIVDROP(0) | V_SCMD_HDR_LEN(dsgl_len));
crwr->key_ctx.ctx_hdr = s->blkcipher.key_ctx_hdr;
- switch (crde->crd_alg) {
- case CRYPTO_AES_CBC:
- if (crde->crd_flags & CRD_F_ENCRYPT)
+ switch (s->blkcipher.cipher_mode) {
+ case SCMD_CIPH_MODE_AES_CBC:
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
memcpy(crwr->key_ctx.key, s->blkcipher.enckey,
s->blkcipher.key_len);
else
memcpy(crwr->key_ctx.key, s->blkcipher.deckey,
s->blkcipher.key_len);
break;
- case CRYPTO_AES_ICM:
+ case SCMD_CIPH_MODE_AES_CTR:
memcpy(crwr->key_ctx.key, s->blkcipher.enckey,
s->blkcipher.key_len);
break;
- case CRYPTO_AES_XTS:
+ case SCMD_CIPH_MODE_AES_XTS:
key_half = s->blkcipher.key_len / 2;
memcpy(crwr->key_ctx.key, s->blkcipher.enckey + key_half,
key_half);
- if (crde->crd_flags & CRD_F_ENCRYPT)
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
memcpy(crwr->key_ctx.key + key_half,
s->blkcipher.enckey, key_half);
else
@@ -1070,20 +1052,20 @@ ccr_authenc(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
dst = (char *)(crwr + 1) + kctx_len;
ccr_write_phys_dsgl(sc, dst, dsgl_nsegs);
dst += sizeof(struct cpl_rx_phys_dsgl) + dsgl_len;
- memcpy(dst, iv, s->blkcipher.iv_len);
- dst += s->blkcipher.iv_len;
+ memcpy(dst, iv, iv_len);
+ dst += iv_len;
if (imm_len != 0) {
- if (aad_len != 0) {
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crda->crd_skip, aad_len, dst);
- dst += aad_len;
+ if (crp->crp_aad_length != 0) {
+ crypto_copydata(crp, crp->crp_aad_start,
+ crp->crp_aad_length, dst);
+ dst += crp->crp_aad_length;
}
- crypto_copydata(crp->crp_flags, crp->crp_buf, crde->crd_skip,
- crde->crd_len, dst);
- dst += crde->crd_len;
+ crypto_copydata(crp, crp->crp_payload_start,
+ crp->crp_payload_length, dst);
+ dst += crp->crp_payload_length;
if (op_type == CHCR_DECRYPT_OP)
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crda->crd_inject, hash_size_in_response, dst);
+ crypto_copydata(crp, crp->crp_digest_start,
+ hash_size_in_response, dst);
} else
ccr_write_ulptx_sgl(sc, dst, sgl_nsegs);
@@ -1094,38 +1076,19 @@ ccr_authenc(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
}
static int
-ccr_authenc_done(struct ccr_softc *sc, struct ccr_session *s,
+ccr_eta_done(struct ccr_softc *sc, struct ccr_session *s,
struct cryptop *crp, const struct cpl_fw6_pld *cpl, int error)
{
- struct cryptodesc *crd;
/*
* The updated IV to permit chained requests is at
* cpl->data[2], but OCF doesn't permit chained requests.
- *
- * For a decryption request, the hardware may do a verification
- * of the HMAC which will fail if the existing HMAC isn't in the
- * buffer. If that happens, clear the error and copy the HMAC
- * from the CPL reply into the buffer.
- *
- * For encryption requests, crd should be the cipher request
- * which will have CRD_F_ENCRYPT set. For decryption
- * requests, crp_desc will be the HMAC request which should
- * not have this flag set.
*/
- crd = crp->crp_desc;
- if (error == EBADMSG && !CHK_PAD_ERR_BIT(be64toh(cpl->data[0])) &&
- !(crd->crd_flags & CRD_F_ENCRYPT)) {
- crypto_copyback(crp->crp_flags, crp->crp_buf, crd->crd_inject,
- s->hmac.hash_len, (c_caddr_t)(cpl + 1));
- error = 0;
- }
return (error);
}
static int
-ccr_gcm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
- struct cryptodesc *crda, struct cryptodesc *crde)
+ccr_gcm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp)
{
char iv[CHCR_MAX_CRYPTO_IV_LEN];
struct chcr_wr *crwr;
@@ -1146,21 +1109,14 @@ ccr_gcm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
* The crypto engine doesn't handle GCM requests with an empty
* payload, so handle those in software instead.
*/
- if (crde->crd_len == 0)
+ if (crp->crp_payload_length == 0)
return (EMSGSIZE);
- /*
- * AAD is only permitted before the cipher/plain text, not
- * after.
- */
- if (crda->crd_len + crda->crd_skip > crde->crd_len + crde->crd_skip)
- return (EMSGSIZE);
-
- if (crda->crd_len + AES_BLOCK_LEN > MAX_AAD_LEN)
+ if (crp->crp_aad_length + AES_BLOCK_LEN > MAX_AAD_LEN)
return (EMSGSIZE);
hash_size_in_response = s->gmac.hash_len;
- if (crde->crd_flags & CRD_F_ENCRYPT)
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
op_type = CHCR_ENCRYPT_OP;
else
op_type = CHCR_DECRYPT_OP;
@@ -1187,6 +1143,12 @@ ccr_gcm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
iv_len = s->blkcipher.iv_len;
/*
+ * GCM requests should always provide an explicit IV.
+ */
+ if ((crp->crp_flags & CRYPTO_F_IV_SEPARATE) == 0)
+ return (EINVAL);
+
+ /*
* The output buffer consists of the cipher text followed by
* the tag when encrypting. For decryption it only contains
* the plain text.
@@ -1196,25 +1158,26 @@ ccr_gcm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
* output buffer.
*/
if (op_type == CHCR_ENCRYPT_OP) {
- if (iv_len + crda->crd_len + crde->crd_len +
+ if (iv_len + crp->crp_aad_length + crp->crp_payload_length +
hash_size_in_response > MAX_REQUEST_SIZE)
return (EFBIG);
} else {
- if (iv_len + crda->crd_len + crde->crd_len > MAX_REQUEST_SIZE)
+ if (iv_len + crp->crp_aad_length + crp->crp_payload_length >
+ MAX_REQUEST_SIZE)
return (EFBIG);
}
sglist_reset(sc->sg_dsgl);
error = sglist_append_sglist(sc->sg_dsgl, sc->sg_iv_aad, 0, iv_len +
- crda->crd_len);
+ crp->crp_aad_length);
if (error)
return (error);
- error = sglist_append_sglist(sc->sg_dsgl, sc->sg_crp, crde->crd_skip,
- crde->crd_len);
+ error = sglist_append_sglist(sc->sg_dsgl, sc->sg_crp,
+ crp->crp_payload_start, crp->crp_payload_length);
if (error)
return (error);
if (op_type == CHCR_ENCRYPT_OP) {
error = sglist_append_sglist(sc->sg_dsgl, sc->sg_crp,
- crda->crd_inject, hash_size_in_response);
+ crp->crp_digest_start, hash_size_in_response);
if (error)
return (error);
}
@@ -1241,7 +1204,7 @@ ccr_gcm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
* inside of the AAD region, so a second copy is always
* required.
*/
- input_len = crda->crd_len + crde->crd_len;
+ input_len = crp->crp_aad_length + crp->crp_payload_length;
if (op_type == CHCR_DECRYPT_OP)
input_len += hash_size_in_response;
if (input_len > MAX_REQUEST_SIZE)
@@ -1253,19 +1216,19 @@ ccr_gcm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
} else {
imm_len = 0;
sglist_reset(sc->sg_ulptx);
- if (crda->crd_len != 0) {
+ if (crp->crp_aad_length != 0) {
error = sglist_append_sglist(sc->sg_ulptx, sc->sg_crp,
- crda->crd_skip, crda->crd_len);
+ crp->crp_aad_start, crp->crp_aad_length);
if (error)
return (error);
}
error = sglist_append_sglist(sc->sg_ulptx, sc->sg_crp,
- crde->crd_skip, crde->crd_len);
+ crp->crp_payload_start, crp->crp_payload_length);
if (error)
return (error);
if (op_type == CHCR_DECRYPT_OP) {
error = sglist_append_sglist(sc->sg_ulptx, sc->sg_crp,
- crda->crd_inject, hash_size_in_response);
+ crp->crp_digest_start, hash_size_in_response);
if (error)
return (error);
}
@@ -1273,14 +1236,14 @@ ccr_gcm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
sgl_len = ccr_ulptx_sgl_len(sgl_nsegs);
}
- if (crda->crd_len != 0) {
+ if (crp->crp_aad_length != 0) {
aad_start = iv_len + 1;
- aad_stop = aad_start + crda->crd_len - 1;
+ aad_stop = aad_start + crp->crp_aad_length - 1;
} else {
aad_start = 0;
aad_stop = 0;
}
- cipher_start = iv_len + crda->crd_len + 1;
+ cipher_start = iv_len + crp->crp_aad_length + 1;
if (op_type == CHCR_DECRYPT_OP)
cipher_stop = hash_size_in_response;
else
@@ -1302,29 +1265,7 @@ ccr_gcm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
crwr = wrtod(wr);
memset(crwr, 0, wr_len);
- /*
- * Read the existing IV from the request or generate a random
- * one if none is provided. Optionally copy the generated IV
- * into the output buffer if requested.
- *
- * If the input IV is 12 bytes, append an explicit 4-byte
- * counter of 1.
- */
- if (op_type == CHCR_ENCRYPT_OP) {
- if (crde->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(iv, crde->crd_iv, s->blkcipher.iv_len);
- else
- arc4rand(iv, s->blkcipher.iv_len, 0);
- if ((crde->crd_flags & CRD_F_IV_PRESENT) == 0)
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- crde->crd_inject, s->blkcipher.iv_len, iv);
- } else {
- if (crde->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(iv, crde->crd_iv, s->blkcipher.iv_len);
- else
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crde->crd_inject, s->blkcipher.iv_len, iv);
- }
+ memcpy(iv, crp->crp_iv, s->blkcipher.iv_len);
if (s->blkcipher.iv_len == 12)
*(uint32_t *)&iv[12] = htobe32(1);
@@ -1343,13 +1284,12 @@ ccr_gcm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
/*
* NB: cipherstop is explicitly set to 0. On encrypt it
- * should normally be set to 0 anyway (as the encrypt crd ends
- * at the end of the input). However, for decrypt the cipher
- * ends before the tag in the AUTHENC case (and authstop is
- * set to stop before the tag), but for GCM the cipher still
- * runs to the end of the buffer. Not sure if this is
- * intentional or a firmware quirk, but it is required for
- * working tag validation with GCM decryption.
+ * should normally be set to 0 anyway. However, for decrypt
+ * the cipher ends before the tag in the ETA case (and
+ * authstop is set to stop before the tag), but for GCM the
+ * cipher still runs to the end of the buffer. Not sure if
+ * this is intentional or a firmware quirk, but it is required
+ * for working tag validation with GCM decryption.
*/
crwr->sec_cpl.aadstart_cipherstop_hi = htobe32(
V_CPL_TX_SEC_PDU_AADSTART(aad_start) |
@@ -1390,17 +1330,17 @@ ccr_gcm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
memcpy(dst, iv, iv_len);
dst += iv_len;
if (imm_len != 0) {
- if (crda->crd_len != 0) {
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crda->crd_skip, crda->crd_len, dst);
- dst += crda->crd_len;
+ if (crp->crp_aad_length != 0) {
+ crypto_copydata(crp, crp->crp_aad_start,
+ crp->crp_aad_length, dst);
+ dst += crp->crp_aad_length;
}
- crypto_copydata(crp->crp_flags, crp->crp_buf, crde->crd_skip,
- crde->crd_len, dst);
- dst += crde->crd_len;
+ crypto_copydata(crp, crp->crp_payload_start,
+ crp->crp_payload_length, dst);
+ dst += crp->crp_payload_length;
if (op_type == CHCR_DECRYPT_OP)
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crda->crd_inject, hash_size_in_response, dst);
+ crypto_copydata(crp, crp->crp_digest_start,
+ hash_size_in_response, dst);
} else
ccr_write_ulptx_sgl(sc, dst, sgl_nsegs);
@@ -1429,8 +1369,7 @@ ccr_gcm_done(struct ccr_softc *sc, struct ccr_session *s,
* performing the operation in software. Derived from swcr_authenc().
*/
static void
-ccr_gcm_soft(struct ccr_session *s, struct cryptop *crp,
- struct cryptodesc *crda, struct cryptodesc *crde)
+ccr_gcm_soft(struct ccr_session *s, struct cryptop *crp)
{
struct auth_hash *axf;
struct enc_xform *exf;
@@ -1478,30 +1417,19 @@ ccr_gcm_soft(struct ccr_session *s, struct cryptop *crp,
* This assumes a 12-byte IV from the crp. See longer comment
* above in ccr_gcm() for more details.
*/
- if (crde->crd_flags & CRD_F_ENCRYPT) {
- if (crde->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(iv, crde->crd_iv, 12);
- else
- arc4rand(iv, 12, 0);
- if ((crde->crd_flags & CRD_F_IV_PRESENT) == 0)
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- crde->crd_inject, 12, iv);
- } else {
- if (crde->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(iv, crde->crd_iv, 12);
- else
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crde->crd_inject, 12, iv);
+ if ((crp->crp_flags & CRYPTO_F_IV_SEPARATE) == 0) {
+ error = EINVAL;
+ goto out;
}
+ memcpy(iv, crp->crp_iv, 12);
*(uint32_t *)&iv[12] = htobe32(1);
axf->Reinit(auth_ctx, iv, sizeof(iv));
/* MAC the AAD. */
- for (i = 0; i < crda->crd_len; i += sizeof(block)) {
- len = imin(crda->crd_len - i, sizeof(block));
- crypto_copydata(crp->crp_flags, crp->crp_buf, crda->crd_skip +
- i, len, block);
+ for (i = 0; i < crp->crp_aad_length; i += sizeof(block)) {
+ len = imin(crp->crp_aad_length - i, sizeof(block));
+ crypto_copydata(crp, crp->crp_aad_start + i, len, block);
bzero(block + len, sizeof(block) - len);
axf->Update(auth_ctx, block, sizeof(block));
}
@@ -1509,16 +1437,15 @@ ccr_gcm_soft(struct ccr_session *s, struct cryptop *crp,
exf->reinit(kschedule, iv);
/* Do encryption with MAC */
- for (i = 0; i < crde->crd_len; i += sizeof(block)) {
- len = imin(crde->crd_len - i, sizeof(block));
- crypto_copydata(crp->crp_flags, crp->crp_buf, crde->crd_skip +
- i, len, block);
+ for (i = 0; i < crp->crp_payload_length; i += sizeof(block)) {
+ len = imin(crp->crp_payload_length - i, sizeof(block));
+ crypto_copydata(crp, crp->crp_payload_start + i, len, block);
bzero(block + len, sizeof(block) - len);
- if (crde->crd_flags & CRD_F_ENCRYPT) {
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) {
exf->encrypt(kschedule, block);
axf->Update(auth_ctx, block, len);
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- crde->crd_skip + i, len, block);
+ crypto_copyback(crp, crp->crp_payload_start + i, len,
+ block);
} else {
axf->Update(auth_ctx, block, len);
}
@@ -1526,35 +1453,37 @@ ccr_gcm_soft(struct ccr_session *s, struct cryptop *crp,
/* Length block. */
bzero(block, sizeof(block));
- ((uint32_t *)block)[1] = htobe32(crda->crd_len * 8);
- ((uint32_t *)block)[3] = htobe32(crde->crd_len * 8);
+ ((uint32_t *)block)[1] = htobe32(crp->crp_aad_length * 8);
+ ((uint32_t *)block)[3] = htobe32(crp->crp_payload_length * 8);
axf->Update(auth_ctx, block, sizeof(block));
/* Finalize MAC. */
axf->Final(digest, auth_ctx);
/* Inject or validate tag. */
- if (crde->crd_flags & CRD_F_ENCRYPT) {
- crypto_copyback(crp->crp_flags, crp->crp_buf, crda->crd_inject,
- sizeof(digest), digest);
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) {
+ crypto_copyback(crp, crp->crp_digest_start, sizeof(digest),
+ digest);
error = 0;
} else {
char digest2[GMAC_DIGEST_LEN];
- crypto_copydata(crp->crp_flags, crp->crp_buf, crda->crd_inject,
- sizeof(digest2), digest2);
+ crypto_copydata(crp, crp->crp_digest_start, sizeof(digest2),
+ digest2);
if (timingsafe_bcmp(digest, digest2, sizeof(digest)) == 0) {
error = 0;
/* Tag matches, decrypt data. */
- for (i = 0; i < crde->crd_len; i += sizeof(block)) {
- len = imin(crde->crd_len - i, sizeof(block));
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crde->crd_skip + i, len, block);
+ for (i = 0; i < crp->crp_payload_length;
+ i += sizeof(block)) {
+ len = imin(crp->crp_payload_length - i,
+ sizeof(block));
+ crypto_copydata(crp, crp->crp_payload_start + i,
+ len, block);
bzero(block + len, sizeof(block) - len);
exf->decrypt(kschedule, block);
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- crde->crd_skip + i, len, block);
+ crypto_copyback(crp, crp->crp_payload_start + i,
+ len, block);
}
} else
error = EBADMSG;
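The length block built in the hunk above is the final GHASH input mandated by GCM: the 64-bit big-endian bit lengths of the AAD and the ciphertext. The driver stores each as a single 32-bit word (word indices 1 and 3 of the block) since both byte counts fit in 32 bits. A self-contained sketch of the same construction:

```c
#include <stdint.h>
#include <string.h>

/* Portable htobe32 stand-in for this sketch. */
static uint32_t
to_be32(uint32_t v)
{
	const uint8_t b[4] = { v >> 24, v >> 16, v >> 8, v };
	uint32_t r;

	memcpy(&r, b, 4);
	return (r);
}

/* Build the 16-byte GCM length block: zero except for the low 32 bits
 * of each 64-bit big-endian bit length (AAD in bytes 4-7, payload in
 * bytes 12-15). */
void
gcm_length_block(uint32_t aad_len, uint32_t payload_len, uint8_t block[16])
{
	uint32_t w;

	memset(block, 0, 16);
	w = to_be32(aad_len * 8);
	memcpy(block + 4, &w, 4);
	w = to_be32(payload_len * 8);
	memcpy(block + 12, &w, 4);
}
```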
@@ -1571,8 +1500,8 @@ out:
}
static void
-generate_ccm_b0(struct cryptodesc *crda, struct cryptodesc *crde,
- u_int hash_size_in_response, const char *iv, char *b0)
+generate_ccm_b0(struct cryptop *crp, u_int hash_size_in_response,
+ const char *iv, char *b0)
{
u_int i, payload_len;
@@ -1583,7 +1512,7 @@ generate_ccm_b0(struct cryptodesc *crda, struct cryptodesc *crde,
b0[0] |= (((hash_size_in_response - 2) / 2) << 3);
/* Store the payload length as a big-endian value. */
- payload_len = crde->crd_len;
+ payload_len = crp->crp_payload_length;
for (i = 0; i < iv[0]; i++) {
b0[CCM_CBC_BLOCK_LEN - 1 - i] = payload_len;
payload_len >>= 8;
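The loop above writes the payload length big-endian into the tail of the 16-byte CCM B0 block, using `iv[0]` (the CCM length-field size L) to decide how many bytes to fill. An isolated sketch of just that encoding, with an illustrative function name:

```c
#include <stdint.h>

/* Store payload_len big-endian in the last `lsize` bytes of the
 * 16-byte B0 block, least significant byte in b0[15].  `lsize` plays
 * the role of iv[0] in generate_ccm_b0(). */
void
ccm_b0_store_len(uint32_t payload_len, unsigned lsize, uint8_t b0[16])
{
	unsigned i;

	for (i = 0; i < lsize; i++) {
		b0[15 - i] = payload_len & 0xff;
		payload_len >>= 8;
	}
}
```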
@@ -1595,15 +1524,14 @@ generate_ccm_b0(struct cryptodesc *crda, struct cryptodesc *crde,
* start of block 1. This only assumes a 16-bit AAD length
* since T6 doesn't support large AAD sizes.
*/
- if (crda->crd_len != 0) {
+ if (crp->crp_aad_length != 0) {
b0[0] |= (1 << 6);
- *(uint16_t *)(b0 + CCM_B0_SIZE) = htobe16(crda->crd_len);
+ *(uint16_t *)(b0 + CCM_B0_SIZE) = htobe16(crp->crp_aad_length);
}
}
static int
-ccr_ccm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
- struct cryptodesc *crda, struct cryptodesc *crde)
+ccr_ccm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp)
{
char iv[CHCR_MAX_CRYPTO_IV_LEN];
struct ulptx_idata *idata;
@@ -1625,14 +1553,7 @@ ccr_ccm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
* The crypto engine doesn't handle CCM requests with an empty
* payload, so handle those in software instead.
*/
- if (crde->crd_len == 0)
- return (EMSGSIZE);
-
- /*
- * AAD is only permitted before the cipher/plain text, not
- * after.
- */
- if (crda->crd_len + crda->crd_skip > crde->crd_len + crde->crd_skip)
+ if (crp->crp_payload_length == 0)
return (EMSGSIZE);
/*
@@ -1640,14 +1561,21 @@ ccr_ccm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
* request.
*/
b0_len = CCM_B0_SIZE;
- if (crda->crd_len != 0)
+ if (crp->crp_aad_length != 0)
b0_len += CCM_AAD_FIELD_SIZE;
- aad_len = b0_len + crda->crd_len;
+ aad_len = b0_len + crp->crp_aad_length;
+
+ /*
+ * CCM requests should always provide an explicit IV (really
+ * the nonce).
+ */
+ if ((crp->crp_flags & CRYPTO_F_IV_SEPARATE) == 0)
+ return (EINVAL);
/*
- * Always assume a 12 byte input IV for now since that is what
- * OCF always generates. The full IV in the work request is
- * 16 bytes.
+ * Always assume a 12 byte input nonce for now since that is
+ * what OCF always generates. The full IV in the work request
+ * is 16 bytes.
*/
iv_len = AES_BLOCK_LEN;
@@ -1655,7 +1583,7 @@ ccr_ccm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
return (EMSGSIZE);
hash_size_in_response = s->ccm_mac.hash_len;
- if (crde->crd_flags & CRD_F_ENCRYPT)
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
op_type = CHCR_ENCRYPT_OP;
else
op_type = CHCR_DECRYPT_OP;
@@ -1670,11 +1598,12 @@ ccr_ccm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
* output buffer.
*/
if (op_type == CHCR_ENCRYPT_OP) {
- if (iv_len + aad_len + crde->crd_len + hash_size_in_response >
- MAX_REQUEST_SIZE)
+ if (iv_len + aad_len + crp->crp_payload_length +
+ hash_size_in_response > MAX_REQUEST_SIZE)
return (EFBIG);
} else {
- if (iv_len + aad_len + crde->crd_len > MAX_REQUEST_SIZE)
+ if (iv_len + aad_len + crp->crp_payload_length >
+ MAX_REQUEST_SIZE)
return (EFBIG);
}
sglist_reset(sc->sg_dsgl);
@@ -1682,13 +1611,13 @@ ccr_ccm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
aad_len);
if (error)
return (error);
- error = sglist_append_sglist(sc->sg_dsgl, sc->sg_crp, crde->crd_skip,
- crde->crd_len);
+ error = sglist_append_sglist(sc->sg_dsgl, sc->sg_crp,
+ crp->crp_payload_start, crp->crp_payload_length);
if (error)
return (error);
if (op_type == CHCR_ENCRYPT_OP) {
error = sglist_append_sglist(sc->sg_dsgl, sc->sg_crp,
- crda->crd_inject, hash_size_in_response);
+ crp->crp_digest_start, hash_size_in_response);
if (error)
return (error);
}
@@ -1715,7 +1644,7 @@ ccr_ccm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
* inside of the AAD region, so a second copy is always
* required.
*/
- input_len = aad_len + crde->crd_len;
+ input_len = aad_len + crp->crp_payload_length;
if (op_type == CHCR_DECRYPT_OP)
input_len += hash_size_in_response;
if (input_len > MAX_REQUEST_SIZE)
@@ -1729,19 +1658,19 @@ ccr_ccm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
imm_len = b0_len;
sglist_reset(sc->sg_ulptx);
- if (crda->crd_len != 0) {
+ if (crp->crp_aad_length != 0) {
error = sglist_append_sglist(sc->sg_ulptx, sc->sg_crp,
- crda->crd_skip, crda->crd_len);
+ crp->crp_aad_start, crp->crp_aad_length);
if (error)
return (error);
}
error = sglist_append_sglist(sc->sg_ulptx, sc->sg_crp,
- crde->crd_skip, crde->crd_len);
+ crp->crp_payload_start, crp->crp_payload_length);
if (error)
return (error);
if (op_type == CHCR_DECRYPT_OP) {
error = sglist_append_sglist(sc->sg_ulptx, sc->sg_crp,
- crda->crd_inject, hash_size_in_response);
+ crp->crp_digest_start, hash_size_in_response);
if (error)
return (error);
}
@@ -1774,27 +1703,12 @@ ccr_ccm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
memset(crwr, 0, wr_len);
/*
- * Read the nonce from the request or generate a random one if
- * none is provided. Use the nonce to generate the full IV
- * with the counter set to 0.
+ * Read the nonce from the request. Use the nonce to generate
+ * the full IV with the counter set to 0.
*/
memset(iv, 0, iv_len);
iv[0] = (15 - AES_CCM_IV_LEN) - 1;
- if (op_type == CHCR_ENCRYPT_OP) {
- if (crde->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(iv + 1, crde->crd_iv, AES_CCM_IV_LEN);
- else
- arc4rand(iv + 1, AES_CCM_IV_LEN, 0);
- if ((crde->crd_flags & CRD_F_IV_PRESENT) == 0)
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- crde->crd_inject, AES_CCM_IV_LEN, iv + 1);
- } else {
- if (crde->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(iv + 1, crde->crd_iv, AES_CCM_IV_LEN);
- else
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crde->crd_inject, AES_CCM_IV_LEN, iv + 1);
- }
+ memcpy(iv + 1, crp->crp_iv, AES_CCM_IV_LEN);
ccr_populate_wreq(sc, crwr, kctx_len, wr_len, imm_len, sgl_len, 0,
crp);
@@ -1851,20 +1765,20 @@ ccr_ccm(struct ccr_softc *sc, struct ccr_session *s, struct cryptop *crp,
dst += sizeof(struct cpl_rx_phys_dsgl) + dsgl_len;
memcpy(dst, iv, iv_len);
dst += iv_len;
- generate_ccm_b0(crda, crde, hash_size_in_response, iv, dst);
+ generate_ccm_b0(crp, hash_size_in_response, iv, dst);
if (sgl_nsegs == 0) {
dst += b0_len;
- if (crda->crd_len != 0) {
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crda->crd_skip, crda->crd_len, dst);
- dst += crda->crd_len;
+ if (crp->crp_aad_length != 0) {
+ crypto_copydata(crp, crp->crp_aad_start,
+ crp->crp_aad_length, dst);
+ dst += crp->crp_aad_length;
}
- crypto_copydata(crp->crp_flags, crp->crp_buf, crde->crd_skip,
- crde->crd_len, dst);
- dst += crde->crd_len;
+ crypto_copydata(crp, crp->crp_payload_start,
+ crp->crp_payload_length, dst);
+ dst += crp->crp_payload_length;
if (op_type == CHCR_DECRYPT_OP)
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crda->crd_inject, hash_size_in_response, dst);
+ crypto_copydata(crp, crp->crp_digest_start,
+ hash_size_in_response, dst);
} else {
dst += CCM_B0_SIZE;
if (b0_len > CCM_B0_SIZE) {
@@ -1911,8 +1825,7 @@ ccr_ccm_done(struct ccr_softc *sc, struct ccr_session *s,
* performing the operation in software. Derived from swcr_authenc().
*/
static void
-ccr_ccm_soft(struct ccr_session *s, struct cryptop *crp,
- struct cryptodesc *crda, struct cryptodesc *crde)
+ccr_ccm_soft(struct ccr_session *s, struct cryptop *crp)
{
struct auth_hash *axf;
struct enc_xform *exf;
@@ -1956,31 +1869,20 @@ ccr_ccm_soft(struct ccr_session *s, struct cryptop *crp,
if (error)
goto out;
- if (crde->crd_flags & CRD_F_ENCRYPT) {
- if (crde->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(iv, crde->crd_iv, AES_CCM_IV_LEN);
- else
- arc4rand(iv, AES_CCM_IV_LEN, 0);
- if ((crde->crd_flags & CRD_F_IV_PRESENT) == 0)
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- crde->crd_inject, AES_CCM_IV_LEN, iv);
- } else {
- if (crde->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(iv, crde->crd_iv, AES_CCM_IV_LEN);
- else
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crde->crd_inject, AES_CCM_IV_LEN, iv);
+ if ((crp->crp_flags & CRYPTO_F_IV_SEPARATE) == 0) {
+ error = EINVAL;
+ goto out;
}
+ memcpy(iv, crp->crp_iv, AES_CCM_IV_LEN);
- auth_ctx->aes_cbc_mac_ctx.authDataLength = crda->crd_len;
- auth_ctx->aes_cbc_mac_ctx.cryptDataLength = crde->crd_len;
+ auth_ctx->aes_cbc_mac_ctx.authDataLength = crp->crp_aad_length;
+ auth_ctx->aes_cbc_mac_ctx.cryptDataLength = crp->crp_payload_length;
axf->Reinit(auth_ctx, iv, sizeof(iv));
/* MAC the AAD. */
- for (i = 0; i < crda->crd_len; i += sizeof(block)) {
- len = imin(crda->crd_len - i, sizeof(block));
- crypto_copydata(crp->crp_flags, crp->crp_buf, crda->crd_skip +
- i, len, block);
+ for (i = 0; i < crp->crp_aad_length; i += sizeof(block)) {
+ len = imin(crp->crp_aad_length - i, sizeof(block));
+ crypto_copydata(crp, crp->crp_aad_start + i, len, block);
bzero(block + len, sizeof(block) - len);
axf->Update(auth_ctx, block, sizeof(block));
}
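The software fallback MACs the AAD in fixed-size chunks, zero-padding the tail of the final partial block before each `Update()` call. A self-contained restatement of that chunking pattern (XOR-accumulating instead of hashing, so it can run standalone):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLKSZ 16

/* Consume `len` bytes in BLKSZ chunks, zero-padding the final partial
 * block, mirroring the imin()/bzero() loop above.  For illustration the
 * blocks are XOR-folded into `acc` rather than fed to a hash; returns the
 * number of blocks processed. */
static size_t
xor_padded_blocks(const uint8_t *data, size_t len, uint8_t acc[BLKSZ])
{
	uint8_t block[BLKSZ];
	size_t i, j, n, nblocks = 0;

	memset(acc, 0, BLKSZ);
	for (i = 0; i < len; i += BLKSZ) {
		n = len - i < BLKSZ ? len - i : BLKSZ;	/* imin() */
		memcpy(block, data + i, n);
		memset(block + n, 0, BLKSZ - n);	/* bzero() the tail */
		for (j = 0; j < BLKSZ; j++)
			acc[j] ^= block[j];
		nblocks++;
	}
	return (nblocks);
}
```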
@@ -1988,16 +1890,15 @@ ccr_ccm_soft(struct ccr_session *s, struct cryptop *crp,
exf->reinit(kschedule, iv);
/* Do encryption/decryption with MAC */
- for (i = 0; i < crde->crd_len; i += sizeof(block)) {
- len = imin(crde->crd_len - i, sizeof(block));
- crypto_copydata(crp->crp_flags, crp->crp_buf, crde->crd_skip +
- i, len, block);
+ for (i = 0; i < crp->crp_payload_length; i += sizeof(block)) {
+ len = imin(crp->crp_payload_length - i, sizeof(block));
+ crypto_copydata(crp, crp->crp_payload_start + i, len, block);
bzero(block + len, sizeof(block) - len);
- if (crde->crd_flags & CRD_F_ENCRYPT) {
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) {
axf->Update(auth_ctx, block, len);
exf->encrypt(kschedule, block);
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- crde->crd_skip + i, len, block);
+ crypto_copyback(crp, crp->crp_payload_start + i, len,
+ block);
} else {
exf->decrypt(kschedule, block);
axf->Update(auth_ctx, block, len);
@@ -2008,28 +1909,30 @@ ccr_ccm_soft(struct ccr_session *s, struct cryptop *crp,
axf->Final(digest, auth_ctx);
/* Inject or validate tag. */
- if (crde->crd_flags & CRD_F_ENCRYPT) {
- crypto_copyback(crp->crp_flags, crp->crp_buf, crda->crd_inject,
- sizeof(digest), digest);
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) {
+ crypto_copyback(crp, crp->crp_digest_start, sizeof(digest),
+ digest);
error = 0;
} else {
- char digest2[GMAC_DIGEST_LEN];
+ char digest2[AES_CBC_MAC_HASH_LEN];
- crypto_copydata(crp->crp_flags, crp->crp_buf, crda->crd_inject,
- sizeof(digest2), digest2);
+ crypto_copydata(crp, crp->crp_digest_start, sizeof(digest2),
+ digest2);
if (timingsafe_bcmp(digest, digest2, sizeof(digest)) == 0) {
error = 0;
/* Tag matches, decrypt data. */
exf->reinit(kschedule, iv);
- for (i = 0; i < crde->crd_len; i += sizeof(block)) {
- len = imin(crde->crd_len - i, sizeof(block));
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crde->crd_skip + i, len, block);
+ for (i = 0; i < crp->crp_payload_length;
+ i += sizeof(block)) {
+ len = imin(crp->crp_payload_length - i,
+ sizeof(block));
+ crypto_copydata(crp, crp->crp_payload_start + i,
+ len, block);
bzero(block + len, sizeof(block) - len);
exf->decrypt(kschedule, block);
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- crde->crd_skip + i, len, block);
+ crypto_copyback(crp, crp->crp_payload_start + i,
+ len, block);
}
} else
error = EBADMSG;
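Tag validation above uses `timingsafe_bcmp()` so that comparison time does not leak how many leading digest bytes matched. A minimal equivalent of that primitive, for reference:

```c
#include <assert.h>
#include <stddef.h>

/* Constant-time comparison in the spirit of timingsafe_bcmp(9): every byte
 * is examined regardless of earlier mismatches, so timing does not reveal
 * the position of the first differing byte.  Returns 0 iff equal. */
static int
ct_bcmp(const void *a, const void *b, size_t len)
{
	const unsigned char *p = a, *q = b;
	unsigned char diff = 0;
	size_t i;

	for (i = 0; i < len; i++)
		diff |= p[i] ^ q[i];
	return (diff != 0);
}
```

An early-exit `memcmp()` here would let an attacker forge a tag byte-by-byte by measuring rejection latency, which is why the driver returns EBADMSG only after the full-width compare.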
@@ -2096,11 +1999,11 @@ ccr_sysctls(struct ccr_softc *sc)
SYSCTL_ADD_U64(ctx, children, OID_AUTO, "cipher_decrypt", CTLFLAG_RD,
&sc->stats_blkcipher_decrypt, 0,
"Cipher decryption requests submitted");
- SYSCTL_ADD_U64(ctx, children, OID_AUTO, "authenc_encrypt", CTLFLAG_RD,
- &sc->stats_authenc_encrypt, 0,
+ SYSCTL_ADD_U64(ctx, children, OID_AUTO, "eta_encrypt", CTLFLAG_RD,
+ &sc->stats_eta_encrypt, 0,
"Combined AES+HMAC encryption requests submitted");
- SYSCTL_ADD_U64(ctx, children, OID_AUTO, "authenc_decrypt", CTLFLAG_RD,
- &sc->stats_authenc_decrypt, 0,
+ SYSCTL_ADD_U64(ctx, children, OID_AUTO, "eta_decrypt", CTLFLAG_RD,
+ &sc->stats_eta_decrypt, 0,
"Combined AES+HMAC decryption requests submitted");
SYSCTL_ADD_U64(ctx, children, OID_AUTO, "gcm_encrypt", CTLFLAG_RD,
&sc->stats_gcm_encrypt, 0, "AES-GCM encryption requests submitted");
@@ -2161,25 +2064,6 @@ ccr_attach(device_t dev)
sc->sg_iv_aad = sglist_build(sc->iv_aad_buf, MAX_AAD_LEN, M_WAITOK);
ccr_sysctls(sc);
- crypto_register(cid, CRYPTO_SHA1, 0, 0);
- crypto_register(cid, CRYPTO_SHA2_224, 0, 0);
- crypto_register(cid, CRYPTO_SHA2_256, 0, 0);
- crypto_register(cid, CRYPTO_SHA2_384, 0, 0);
- crypto_register(cid, CRYPTO_SHA2_512, 0, 0);
- crypto_register(cid, CRYPTO_SHA1_HMAC, 0, 0);
- crypto_register(cid, CRYPTO_SHA2_224_HMAC, 0, 0);
- crypto_register(cid, CRYPTO_SHA2_256_HMAC, 0, 0);
- crypto_register(cid, CRYPTO_SHA2_384_HMAC, 0, 0);
- crypto_register(cid, CRYPTO_SHA2_512_HMAC, 0, 0);
- crypto_register(cid, CRYPTO_AES_CBC, 0, 0);
- crypto_register(cid, CRYPTO_AES_ICM, 0, 0);
- crypto_register(cid, CRYPTO_AES_NIST_GCM_16, 0, 0);
- crypto_register(cid, CRYPTO_AES_128_NIST_GMAC, 0, 0);
- crypto_register(cid, CRYPTO_AES_192_NIST_GMAC, 0, 0);
- crypto_register(cid, CRYPTO_AES_256_NIST_GMAC, 0, 0);
- crypto_register(cid, CRYPTO_AES_XTS, 0, 0);
- crypto_register(cid, CRYPTO_AES_CCM_16, 0, 0);
- crypto_register(cid, CRYPTO_AES_CCM_CBC_MAC, 0, 0);
return (0);
}
@@ -2207,48 +2091,48 @@ ccr_detach(device_t dev)
}
static void
-ccr_init_hash_digest(struct ccr_session *s, int cri_alg)
+ccr_init_hash_digest(struct ccr_session *s)
{
union authctx auth_ctx;
struct auth_hash *axf;
axf = s->hmac.auth_hash;
axf->Init(&auth_ctx);
- t4_copy_partial_hash(cri_alg, &auth_ctx, s->hmac.pads);
+ t4_copy_partial_hash(axf->type, &auth_ctx, s->hmac.pads);
}
-static int
+static bool
ccr_aes_check_keylen(int alg, int klen)
{
- switch (klen) {
+ switch (klen * 8) {
case 128:
case 192:
if (alg == CRYPTO_AES_XTS)
- return (EINVAL);
+ return (false);
break;
case 256:
break;
case 512:
if (alg != CRYPTO_AES_XTS)
- return (EINVAL);
+ return (false);
break;
default:
- return (EINVAL);
+ return (false);
}
- return (0);
+ return (true);
}
static void
-ccr_aes_setkey(struct ccr_session *s, int alg, const void *key, int klen)
+ccr_aes_setkey(struct ccr_session *s, const void *key, int klen)
{
unsigned int ck_size, iopad_size, kctx_flits, kctx_len, kbits, mk_size;
unsigned int opad_present;
- if (alg == CRYPTO_AES_XTS)
- kbits = klen / 2;
+ if (s->blkcipher.cipher_mode == SCMD_CIPH_MODE_AES_XTS)
+ kbits = (klen / 2) * 8;
else
- kbits = klen;
+ kbits = klen * 8;
switch (kbits) {
case 128:
ck_size = CHCR_KEYCTX_CIPHER_KEY_SIZE_128;
@@ -2263,18 +2147,18 @@ ccr_aes_setkey(struct ccr_session *s, int alg, const void *key, int klen)
panic("should not get here");
}
- s->blkcipher.key_len = klen / 8;
+ s->blkcipher.key_len = klen;
memcpy(s->blkcipher.enckey, key, s->blkcipher.key_len);
- switch (alg) {
- case CRYPTO_AES_CBC:
- case CRYPTO_AES_XTS:
+ switch (s->blkcipher.cipher_mode) {
+ case SCMD_CIPH_MODE_AES_CBC:
+ case SCMD_CIPH_MODE_AES_XTS:
t4_aes_getdeckey(s->blkcipher.deckey, key, kbits);
break;
}
kctx_len = roundup2(s->blkcipher.key_len, 16);
switch (s->mode) {
- case AUTHENC:
+ case ETA:
mk_size = s->hmac.mk_size;
opad_present = 1;
iopad_size = roundup2(s->hmac.partial_digest_len, 16);
@@ -2309,171 +2193,220 @@ ccr_aes_setkey(struct ccr_session *s, int alg, const void *key, int klen)
}
kctx_flits = (sizeof(struct _key_ctx) + kctx_len) / 16;
s->blkcipher.key_ctx_hdr = htobe32(V_KEY_CONTEXT_CTX_LEN(kctx_flits) |
- V_KEY_CONTEXT_DUAL_CK(alg == CRYPTO_AES_XTS) |
+ V_KEY_CONTEXT_DUAL_CK(s->blkcipher.cipher_mode ==
+ SCMD_CIPH_MODE_AES_XTS) |
V_KEY_CONTEXT_OPAD_PRESENT(opad_present) |
V_KEY_CONTEXT_SALT_PRESENT(1) | V_KEY_CONTEXT_CK_SIZE(ck_size) |
V_KEY_CONTEXT_MK_SIZE(mk_size) | V_KEY_CONTEXT_VALID(1));
}
+static bool
+ccr_auth_supported(const struct crypto_session_params *csp)
+{
+
+ switch (csp->csp_auth_alg) {
+ case CRYPTO_SHA1:
+ case CRYPTO_SHA2_224:
+ case CRYPTO_SHA2_256:
+ case CRYPTO_SHA2_384:
+ case CRYPTO_SHA2_512:
+ case CRYPTO_SHA1_HMAC:
+ case CRYPTO_SHA2_224_HMAC:
+ case CRYPTO_SHA2_256_HMAC:
+ case CRYPTO_SHA2_384_HMAC:
+ case CRYPTO_SHA2_512_HMAC:
+ break;
+ default:
+ return (false);
+ }
+ return (true);
+}
+
+static bool
+ccr_cipher_supported(const struct crypto_session_params *csp)
+{
+
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_AES_CBC:
+ if (csp->csp_ivlen != AES_BLOCK_LEN)
+ return (false);
+ break;
+ case CRYPTO_AES_ICM:
+ if (csp->csp_ivlen != AES_BLOCK_LEN)
+ return (false);
+ break;
+ case CRYPTO_AES_XTS:
+ if (csp->csp_ivlen != AES_XTS_IV_LEN)
+ return (false);
+ break;
+ default:
+ return (false);
+ }
+ return (ccr_aes_check_keylen(csp->csp_cipher_alg,
+ csp->csp_cipher_klen));
+}
+
static int
-ccr_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
+ccr_cipher_mode(const struct crypto_session_params *csp)
{
- struct ccr_softc *sc;
- struct ccr_session *s;
- struct auth_hash *auth_hash;
- struct cryptoini *c, *hash, *cipher;
- unsigned int auth_mode, cipher_mode, iv_len, mk_size;
- unsigned int partial_digest_len;
- int error;
- bool gcm_hash, hmac;
- if (cri == NULL)
- return (EINVAL);
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_AES_CBC:
+ return (SCMD_CIPH_MODE_AES_CBC);
+ case CRYPTO_AES_ICM:
+ return (SCMD_CIPH_MODE_AES_CTR);
+ case CRYPTO_AES_NIST_GCM_16:
+ return (SCMD_CIPH_MODE_AES_GCM);
+ case CRYPTO_AES_XTS:
+ return (SCMD_CIPH_MODE_AES_XTS);
+ case CRYPTO_AES_CCM_16:
+ return (SCMD_CIPH_MODE_AES_CCM);
+ default:
+ return (SCMD_CIPH_MODE_NOP);
+ }
+}
+
+static int
+ccr_probesession(device_t dev, const struct crypto_session_params *csp)
+{
+ unsigned int cipher_mode;
- gcm_hash = false;
- hmac = false;
- cipher = NULL;
- hash = NULL;
- auth_hash = NULL;
- auth_mode = SCMD_AUTH_MODE_NOP;
- cipher_mode = SCMD_CIPH_MODE_NOP;
- iv_len = 0;
- mk_size = 0;
- partial_digest_len = 0;
- for (c = cri; c != NULL; c = c->cri_next) {
- switch (c->cri_alg) {
- case CRYPTO_SHA1:
- case CRYPTO_SHA2_224:
- case CRYPTO_SHA2_256:
- case CRYPTO_SHA2_384:
- case CRYPTO_SHA2_512:
- case CRYPTO_SHA1_HMAC:
- case CRYPTO_SHA2_224_HMAC:
- case CRYPTO_SHA2_256_HMAC:
- case CRYPTO_SHA2_384_HMAC:
- case CRYPTO_SHA2_512_HMAC:
- case CRYPTO_AES_128_NIST_GMAC:
- case CRYPTO_AES_192_NIST_GMAC:
- case CRYPTO_AES_256_NIST_GMAC:
- case CRYPTO_AES_CCM_CBC_MAC:
- if (hash)
+ if (csp->csp_flags != 0)
+ return (EINVAL);
+ switch (csp->csp_mode) {
+ case CSP_MODE_DIGEST:
+ if (!ccr_auth_supported(csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_CIPHER:
+ if (!ccr_cipher_supported(csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_AEAD:
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_AES_NIST_GCM_16:
+ if (csp->csp_ivlen != AES_GCM_IV_LEN)
+ return (EINVAL);
+ if (csp->csp_auth_mlen < 0 ||
+ csp->csp_auth_mlen > AES_GMAC_HASH_LEN)
return (EINVAL);
- hash = c;
- switch (c->cri_alg) {
- case CRYPTO_SHA1:
- case CRYPTO_SHA1_HMAC:
- auth_hash = &auth_hash_hmac_sha1;
- auth_mode = SCMD_AUTH_MODE_SHA1;
- mk_size = CHCR_KEYCTX_MAC_KEY_SIZE_160;
- partial_digest_len = SHA1_HASH_LEN;
- break;
- case CRYPTO_SHA2_224:
- case CRYPTO_SHA2_224_HMAC:
- auth_hash = &auth_hash_hmac_sha2_224;
- auth_mode = SCMD_AUTH_MODE_SHA224;
- mk_size = CHCR_KEYCTX_MAC_KEY_SIZE_256;
- partial_digest_len = SHA2_256_HASH_LEN;
- break;
- case CRYPTO_SHA2_256:
- case CRYPTO_SHA2_256_HMAC:
- auth_hash = &auth_hash_hmac_sha2_256;
- auth_mode = SCMD_AUTH_MODE_SHA256;
- mk_size = CHCR_KEYCTX_MAC_KEY_SIZE_256;
- partial_digest_len = SHA2_256_HASH_LEN;
- break;
- case CRYPTO_SHA2_384:
- case CRYPTO_SHA2_384_HMAC:
- auth_hash = &auth_hash_hmac_sha2_384;
- auth_mode = SCMD_AUTH_MODE_SHA512_384;
- mk_size = CHCR_KEYCTX_MAC_KEY_SIZE_512;
- partial_digest_len = SHA2_512_HASH_LEN;
- break;
- case CRYPTO_SHA2_512:
- case CRYPTO_SHA2_512_HMAC:
- auth_hash = &auth_hash_hmac_sha2_512;
- auth_mode = SCMD_AUTH_MODE_SHA512_512;
- mk_size = CHCR_KEYCTX_MAC_KEY_SIZE_512;
- partial_digest_len = SHA2_512_HASH_LEN;
- break;
- case CRYPTO_AES_128_NIST_GMAC:
- case CRYPTO_AES_192_NIST_GMAC:
- case CRYPTO_AES_256_NIST_GMAC:
- gcm_hash = true;
- auth_mode = SCMD_AUTH_MODE_GHASH;
- mk_size = CHCR_KEYCTX_MAC_KEY_SIZE_128;
- break;
- case CRYPTO_AES_CCM_CBC_MAC:
- auth_mode = SCMD_AUTH_MODE_CBCMAC;
- break;
- }
- switch (c->cri_alg) {
- case CRYPTO_SHA1_HMAC:
- case CRYPTO_SHA2_224_HMAC:
- case CRYPTO_SHA2_256_HMAC:
- case CRYPTO_SHA2_384_HMAC:
- case CRYPTO_SHA2_512_HMAC:
- hmac = true;
- break;
- }
break;
- case CRYPTO_AES_CBC:
- case CRYPTO_AES_ICM:
- case CRYPTO_AES_NIST_GCM_16:
- case CRYPTO_AES_XTS:
case CRYPTO_AES_CCM_16:
- if (cipher)
+ if (csp->csp_ivlen != AES_CCM_IV_LEN)
+ return (EINVAL);
+ if (csp->csp_auth_mlen < 0 ||
+ csp->csp_auth_mlen > AES_CBC_MAC_HASH_LEN)
return (EINVAL);
- cipher = c;
- switch (c->cri_alg) {
- case CRYPTO_AES_CBC:
- cipher_mode = SCMD_CIPH_MODE_AES_CBC;
- iv_len = AES_BLOCK_LEN;
- break;
- case CRYPTO_AES_ICM:
- cipher_mode = SCMD_CIPH_MODE_AES_CTR;
- iv_len = AES_BLOCK_LEN;
- break;
- case CRYPTO_AES_NIST_GCM_16:
- cipher_mode = SCMD_CIPH_MODE_AES_GCM;
- iv_len = AES_GCM_IV_LEN;
- break;
- case CRYPTO_AES_XTS:
- cipher_mode = SCMD_CIPH_MODE_AES_XTS;
- iv_len = AES_BLOCK_LEN;
- break;
- case CRYPTO_AES_CCM_16:
- cipher_mode = SCMD_CIPH_MODE_AES_CCM;
- iv_len = AES_CCM_IV_LEN;
- break;
- }
- if (c->cri_key != NULL) {
- error = ccr_aes_check_keylen(c->cri_alg,
- c->cri_klen);
- if (error)
- return (error);
- }
break;
default:
return (EINVAL);
}
- }
- if (gcm_hash != (cipher_mode == SCMD_CIPH_MODE_AES_GCM))
- return (EINVAL);
- if ((auth_mode == SCMD_AUTH_MODE_CBCMAC) !=
- (cipher_mode == SCMD_CIPH_MODE_AES_CCM))
- return (EINVAL);
- if (hash == NULL && cipher == NULL)
+ break;
+ case CSP_MODE_ETA:
+ if (!ccr_auth_supported(csp) || !ccr_cipher_supported(csp))
+ return (EINVAL);
+ break;
+ default:
return (EINVAL);
- if (hash != NULL) {
- if (hmac || gcm_hash || auth_mode == SCMD_AUTH_MODE_CBCMAC) {
- if (hash->cri_key == NULL)
- return (EINVAL);
- } else {
- if (hash->cri_key != NULL)
- return (EINVAL);
- }
}
+ if (csp->csp_cipher_klen != 0) {
+ cipher_mode = ccr_cipher_mode(csp);
+ if (cipher_mode == SCMD_CIPH_MODE_NOP)
+ return (EINVAL);
+ }
+
+ return (CRYPTODEV_PROBE_HARDWARE);
+}
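Per the commit message, `crypto_probesession` returns a negative value on success (like `device_probe`) and the framework keeps the driver with the best value, giving hardware precedence over accelerated software over plain software. A sketch of that selection rule; the constant values below are illustrative placeholders, not the real ones from `<opencrypto/cryptodev.h>`:

```c
#include <assert.h>

/* Illustrative stand-ins for the CRYPTODEV_PROBE_* constants; more
 * negative means less preferred.  The actual values are defined by OCF. */
#define PROBE_HARDWARE		(-10)
#define PROBE_ACCEL_SOFTWARE	(-20)
#define PROBE_SOFTWARE		(-30)

/* Pick the better of two successful probe results, as the framework does
 * when several drivers claim a session: the larger (closer to zero)
 * negative value wins, so a hardware driver such as ccr(4) outranks
 * accelerated software (aesni) and plain software (cryptosoft). */
static int
better_probe(int a, int b)
{
	return (a > b ? a : b);
}
```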
+
+static int
+ccr_newsession(device_t dev, crypto_session_t cses,
+ const struct crypto_session_params *csp)
+{
+ struct ccr_softc *sc;
+ struct ccr_session *s;
+ struct auth_hash *auth_hash;
+ unsigned int auth_mode, cipher_mode, mk_size;
+ unsigned int partial_digest_len;
+
+ switch (csp->csp_auth_alg) {
+ case CRYPTO_SHA1:
+ case CRYPTO_SHA1_HMAC:
+ auth_hash = &auth_hash_hmac_sha1;
+ auth_mode = SCMD_AUTH_MODE_SHA1;
+ mk_size = CHCR_KEYCTX_MAC_KEY_SIZE_160;
+ partial_digest_len = SHA1_HASH_LEN;
+ break;
+ case CRYPTO_SHA2_224:
+ case CRYPTO_SHA2_224_HMAC:
+ auth_hash = &auth_hash_hmac_sha2_224;
+ auth_mode = SCMD_AUTH_MODE_SHA224;
+ mk_size = CHCR_KEYCTX_MAC_KEY_SIZE_256;
+ partial_digest_len = SHA2_256_HASH_LEN;
+ break;
+ case CRYPTO_SHA2_256:
+ case CRYPTO_SHA2_256_HMAC:
+ auth_hash = &auth_hash_hmac_sha2_256;
+ auth_mode = SCMD_AUTH_MODE_SHA256;
+ mk_size = CHCR_KEYCTX_MAC_KEY_SIZE_256;
+ partial_digest_len = SHA2_256_HASH_LEN;
+ break;
+ case CRYPTO_SHA2_384:
+ case CRYPTO_SHA2_384_HMAC:
+ auth_hash = &auth_hash_hmac_sha2_384;
+ auth_mode = SCMD_AUTH_MODE_SHA512_384;
+ mk_size = CHCR_KEYCTX_MAC_KEY_SIZE_512;
+ partial_digest_len = SHA2_512_HASH_LEN;
+ break;
+ case CRYPTO_SHA2_512:
+ case CRYPTO_SHA2_512_HMAC:
+ auth_hash = &auth_hash_hmac_sha2_512;
+ auth_mode = SCMD_AUTH_MODE_SHA512_512;
+ mk_size = CHCR_KEYCTX_MAC_KEY_SIZE_512;
+ partial_digest_len = SHA2_512_HASH_LEN;
+ break;
+ default:
+ auth_hash = NULL;
+ auth_mode = SCMD_AUTH_MODE_NOP;
+ mk_size = 0;
+ partial_digest_len = 0;
+ break;
+ }
+
+ cipher_mode = ccr_cipher_mode(csp);
+
+#ifdef INVARIANTS
+ switch (csp->csp_mode) {
+ case CSP_MODE_CIPHER:
+ if (cipher_mode == SCMD_CIPH_MODE_NOP ||
+ cipher_mode == SCMD_CIPH_MODE_AES_GCM ||
+ cipher_mode == SCMD_CIPH_MODE_AES_CCM)
+ panic("invalid cipher algo");
+ break;
+ case CSP_MODE_DIGEST:
+ if (auth_mode == SCMD_AUTH_MODE_NOP)
+ panic("invalid auth algo");
+ break;
+ case CSP_MODE_AEAD:
+ if (cipher_mode != SCMD_CIPH_MODE_AES_GCM &&
+ cipher_mode != SCMD_CIPH_MODE_AES_CCM)
+ panic("invalid aead cipher algo");
+ if (auth_mode != SCMD_AUTH_MODE_NOP)
+ panic("invalid aead auth algo");
+ break;
+ case CSP_MODE_ETA:
+ if (cipher_mode == SCMD_CIPH_MODE_NOP ||
+ cipher_mode == SCMD_CIPH_MODE_AES_GCM ||
+ cipher_mode == SCMD_CIPH_MODE_AES_CCM)
+ panic("invalid cipher algo");
+ if (auth_mode == SCMD_AUTH_MODE_NOP)
+ panic("invalid auth algo");
+ break;
+ default:
+ panic("invalid csp mode");
+ }
+#endif
+
sc = device_get_softc(dev);
/*
@@ -2493,54 +2426,61 @@ ccr_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
s = crypto_get_driver_session(cses);
- if (gcm_hash)
- s->mode = GCM;
- else if (cipher_mode == SCMD_CIPH_MODE_AES_CCM)
- s->mode = CCM;
- else if (hash != NULL && cipher != NULL)
- s->mode = AUTHENC;
- else if (hash != NULL) {
- if (hmac)
+ switch (csp->csp_mode) {
+ case CSP_MODE_AEAD:
+ if (cipher_mode == SCMD_CIPH_MODE_AES_CCM)
+ s->mode = CCM;
+ else
+ s->mode = GCM;
+ break;
+ case CSP_MODE_ETA:
+ s->mode = ETA;
+ break;
+ case CSP_MODE_DIGEST:
+ if (csp->csp_auth_klen != 0)
s->mode = HMAC;
else
s->mode = HASH;
- } else {
- MPASS(cipher != NULL);
+ break;
+ case CSP_MODE_CIPHER:
s->mode = BLKCIPHER;
+ break;
}
- if (gcm_hash) {
- if (hash->cri_mlen == 0)
+
+ if (s->mode == GCM) {
+ if (csp->csp_auth_mlen == 0)
s->gmac.hash_len = AES_GMAC_HASH_LEN;
else
- s->gmac.hash_len = hash->cri_mlen;
- t4_init_gmac_hash(hash->cri_key, hash->cri_klen,
+ s->gmac.hash_len = csp->csp_auth_mlen;
+ t4_init_gmac_hash(csp->csp_cipher_key, csp->csp_cipher_klen,
s->gmac.ghash_h);
- } else if (auth_mode == SCMD_AUTH_MODE_CBCMAC) {
- if (hash->cri_mlen == 0)
+ } else if (s->mode == CCM) {
+ if (csp->csp_auth_mlen == 0)
s->ccm_mac.hash_len = AES_CBC_MAC_HASH_LEN;
else
- s->ccm_mac.hash_len = hash->cri_mlen;
- } else if (hash != NULL) {
+ s->ccm_mac.hash_len = csp->csp_auth_mlen;
+ } else if (auth_mode != SCMD_AUTH_MODE_NOP) {
s->hmac.auth_hash = auth_hash;
s->hmac.auth_mode = auth_mode;
s->hmac.mk_size = mk_size;
s->hmac.partial_digest_len = partial_digest_len;
- if (hash->cri_mlen == 0)
+ if (csp->csp_auth_mlen == 0)
s->hmac.hash_len = auth_hash->hashsize;
else
- s->hmac.hash_len = hash->cri_mlen;
- if (hmac)
+ s->hmac.hash_len = csp->csp_auth_mlen;
+ if (csp->csp_auth_key != NULL)
t4_init_hmac_digest(auth_hash, partial_digest_len,
- hash->cri_key, hash->cri_klen, s->hmac.pads);
+ csp->csp_auth_key, csp->csp_auth_klen,
+ s->hmac.pads);
else
- ccr_init_hash_digest(s, hash->cri_alg);
+ ccr_init_hash_digest(s);
}
- if (cipher != NULL) {
+ if (cipher_mode != SCMD_CIPH_MODE_NOP) {
s->blkcipher.cipher_mode = cipher_mode;
- s->blkcipher.iv_len = iv_len;
- if (cipher->cri_key != NULL)
- ccr_aes_setkey(s, cipher->cri_alg, cipher->cri_key,
- cipher->cri_klen);
+ s->blkcipher.iv_len = csp->csp_ivlen;
+ if (csp->csp_cipher_key != NULL)
+ ccr_aes_setkey(s, csp->csp_cipher_key,
+ csp->csp_cipher_klen);
}
s->active = true;
@@ -2568,15 +2508,12 @@ ccr_freesession(device_t dev, crypto_session_t cses)
static int
ccr_process(device_t dev, struct cryptop *crp, int hint)
{
+ const struct crypto_session_params *csp;
struct ccr_softc *sc;
struct ccr_session *s;
- struct cryptodesc *crd, *crda, *crde;
int error;
- if (crp == NULL)
- return (EINVAL);
-
- crd = crp->crp_desc;
+ csp = crypto_get_params(crp->crp_session);
s = crypto_get_driver_session(crp->crp_session);
sc = device_get_softc(dev);
@@ -2594,141 +2531,82 @@ ccr_process(device_t dev, struct cryptop *crp, int hint)
sc->stats_hash++;
break;
case HMAC:
- if (crd->crd_flags & CRD_F_KEY_EXPLICIT)
+ if (crp->crp_auth_key != NULL)
t4_init_hmac_digest(s->hmac.auth_hash,
- s->hmac.partial_digest_len, crd->crd_key,
- crd->crd_klen, s->hmac.pads);
+ s->hmac.partial_digest_len, crp->crp_auth_key,
+ csp->csp_auth_klen, s->hmac.pads);
error = ccr_hash(sc, s, crp);
if (error == 0)
sc->stats_hmac++;
break;
case BLKCIPHER:
- if (crd->crd_flags & CRD_F_KEY_EXPLICIT) {
- error = ccr_aes_check_keylen(crd->crd_alg,
- crd->crd_klen);
- if (error)
- break;
- ccr_aes_setkey(s, crd->crd_alg, crd->crd_key,
- crd->crd_klen);
- }
+ if (crp->crp_cipher_key != NULL)
+ ccr_aes_setkey(s, crp->crp_cipher_key,
+ csp->csp_cipher_klen);
error = ccr_blkcipher(sc, s, crp);
if (error == 0) {
- if (crd->crd_flags & CRD_F_ENCRYPT)
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
sc->stats_blkcipher_encrypt++;
else
sc->stats_blkcipher_decrypt++;
}
break;
- case AUTHENC:
- error = 0;
- switch (crd->crd_alg) {
- case CRYPTO_AES_CBC:
- case CRYPTO_AES_ICM:
- case CRYPTO_AES_XTS:
- /* Only encrypt-then-authenticate supported. */
- crde = crd;
- crda = crd->crd_next;
- if (!(crde->crd_flags & CRD_F_ENCRYPT)) {
- error = EINVAL;
- break;
- }
- break;
- default:
- crda = crd;
- crde = crd->crd_next;
- if (crde->crd_flags & CRD_F_ENCRYPT) {
- error = EINVAL;
- break;
- }
- break;
- }
- if (error)
- break;
- if (crda->crd_flags & CRD_F_KEY_EXPLICIT)
+ case ETA:
+ if (crp->crp_auth_key != NULL)
t4_init_hmac_digest(s->hmac.auth_hash,
- s->hmac.partial_digest_len, crda->crd_key,
- crda->crd_klen, s->hmac.pads);
- if (crde->crd_flags & CRD_F_KEY_EXPLICIT) {
- error = ccr_aes_check_keylen(crde->crd_alg,
- crde->crd_klen);
- if (error)
- break;
- ccr_aes_setkey(s, crde->crd_alg, crde->crd_key,
- crde->crd_klen);
- }
- error = ccr_authenc(sc, s, crp, crda, crde);
+ s->hmac.partial_digest_len, crp->crp_auth_key,
+ csp->csp_auth_klen, s->hmac.pads);
+ if (crp->crp_cipher_key != NULL)
+ ccr_aes_setkey(s, crp->crp_cipher_key,
+ csp->csp_cipher_klen);
+ error = ccr_eta(sc, s, crp);
if (error == 0) {
- if (crde->crd_flags & CRD_F_ENCRYPT)
- sc->stats_authenc_encrypt++;
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
+ sc->stats_eta_encrypt++;
else
- sc->stats_authenc_decrypt++;
+ sc->stats_eta_decrypt++;
}
break;
case GCM:
- error = 0;
- if (crd->crd_alg == CRYPTO_AES_NIST_GCM_16) {
- crde = crd;
- crda = crd->crd_next;
- } else {
- crda = crd;
- crde = crd->crd_next;
- }
- if (crda->crd_flags & CRD_F_KEY_EXPLICIT)
- t4_init_gmac_hash(crda->crd_key, crda->crd_klen,
- s->gmac.ghash_h);
- if (crde->crd_flags & CRD_F_KEY_EXPLICIT) {
- error = ccr_aes_check_keylen(crde->crd_alg,
- crde->crd_klen);
- if (error)
- break;
- ccr_aes_setkey(s, crde->crd_alg, crde->crd_key,
- crde->crd_klen);
+ if (crp->crp_cipher_key != NULL) {
+ t4_init_gmac_hash(crp->crp_cipher_key,
+ csp->csp_cipher_klen, s->gmac.ghash_h);
+ ccr_aes_setkey(s, crp->crp_cipher_key,
+ csp->csp_cipher_klen);
}
- if (crde->crd_len == 0) {
+ if (crp->crp_payload_length == 0) {
mtx_unlock(&sc->lock);
- ccr_gcm_soft(s, crp, crda, crde);
+ ccr_gcm_soft(s, crp);
return (0);
}
- error = ccr_gcm(sc, s, crp, crda, crde);
+ error = ccr_gcm(sc, s, crp);
if (error == EMSGSIZE) {
sc->stats_sw_fallback++;
mtx_unlock(&sc->lock);
- ccr_gcm_soft(s, crp, crda, crde);
+ ccr_gcm_soft(s, crp);
return (0);
}
if (error == 0) {
- if (crde->crd_flags & CRD_F_ENCRYPT)
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
sc->stats_gcm_encrypt++;
else
sc->stats_gcm_decrypt++;
}
break;
case CCM:
- error = 0;
- if (crd->crd_alg == CRYPTO_AES_CCM_16) {
- crde = crd;
- crda = crd->crd_next;
- } else {
- crda = crd;
- crde = crd->crd_next;
- }
- if (crde->crd_flags & CRD_F_KEY_EXPLICIT) {
- error = ccr_aes_check_keylen(crde->crd_alg,
- crde->crd_klen);
- if (error)
- break;
- ccr_aes_setkey(s, crde->crd_alg, crde->crd_key,
- crde->crd_klen);
+ if (crp->crp_cipher_key != NULL) {
+ ccr_aes_setkey(s, crp->crp_cipher_key,
+ csp->csp_cipher_klen);
}
- error = ccr_ccm(sc, s, crp, crda, crde);
+ error = ccr_ccm(sc, s, crp);
if (error == EMSGSIZE) {
sc->stats_sw_fallback++;
mtx_unlock(&sc->lock);
- ccr_ccm_soft(s, crp, crda, crde);
+ ccr_ccm_soft(s, crp);
return (0);
}
if (error == 0) {
- if (crde->crd_flags & CRD_F_ENCRYPT)
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
sc->stats_ccm_encrypt++;
else
sc->stats_ccm_decrypt++;
@@ -2789,8 +2667,8 @@ do_cpl6_fw_pld(struct sge_iq *iq, const struct rss_header *rss,
case BLKCIPHER:
error = ccr_blkcipher_done(sc, s, crp, cpl, error);
break;
- case AUTHENC:
- error = ccr_authenc_done(sc, s, crp, cpl, error);
+ case ETA:
+ error = ccr_eta_done(sc, s, crp, cpl, error);
break;
case GCM:
error = ccr_gcm_done(sc, s, crp, cpl, error);
@@ -2835,6 +2713,7 @@ static device_method_t ccr_methods[] = {
DEVMETHOD(device_attach, ccr_attach),
DEVMETHOD(device_detach, ccr_detach),
+ DEVMETHOD(cryptodev_probesession, ccr_probesession),
DEVMETHOD(cryptodev_newsession, ccr_newsession),
DEVMETHOD(cryptodev_freesession, ccr_freesession),
DEVMETHOD(cryptodev_process, ccr_process),
diff --git a/sys/dev/cxgbe/crypto/t4_keyctx.c b/sys/dev/cxgbe/crypto/t4_keyctx.c
index bceade0ec810..0f034f1be334 100644
--- a/sys/dev/cxgbe/crypto/t4_keyctx.c
+++ b/sys/dev/cxgbe/crypto/t4_keyctx.c
@@ -73,7 +73,7 @@ t4_init_gmac_hash(const char *key, int klen, char *ghash)
uint32_t keysched[4 * (RIJNDAEL_MAXNR + 1)];
int rounds;
- rounds = rijndaelKeySetupEnc(keysched, key, klen);
+ rounds = rijndaelKeySetupEnc(keysched, key, klen * 8);
rijndaelEncrypt(keysched, rounds, zeroes, ghash);
}
@@ -118,45 +118,19 @@ t4_copy_partial_hash(int alg, union authctx *auth_ctx, void *dst)
void
t4_init_hmac_digest(struct auth_hash *axf, u_int partial_digest_len,
- char *key, int klen, char *dst)
+ const char *key, int klen, char *dst)
{
union authctx auth_ctx;
- char ipad[SHA2_512_BLOCK_LEN], opad[SHA2_512_BLOCK_LEN];
- u_int i;
-
- /*
- * If the key is larger than the block size, use the digest of
- * the key as the key instead.
- */
- klen /= 8;
- if (klen > axf->blocksize) {
- axf->Init(&auth_ctx);
- axf->Update(&auth_ctx, key, klen);
- axf->Final(ipad, &auth_ctx);
- klen = axf->hashsize;
- } else
- memcpy(ipad, key, klen);
-
- memset(ipad + klen, 0, axf->blocksize - klen);
- memcpy(opad, ipad, axf->blocksize);
-
- for (i = 0; i < axf->blocksize; i++) {
- ipad[i] ^= HMAC_IPAD_VAL;
- opad[i] ^= HMAC_OPAD_VAL;
- }
- /*
- * Hash the raw ipad and opad and store the partial results in
- * the key context.
- */
- axf->Init(&auth_ctx);
- axf->Update(&auth_ctx, ipad, axf->blocksize);
+ hmac_init_ipad(axf, key, klen, &auth_ctx);
t4_copy_partial_hash(axf->type, &auth_ctx, dst);
dst += roundup2(partial_digest_len, 16);
- axf->Init(&auth_ctx);
- axf->Update(&auth_ctx, opad, axf->blocksize);
+
+ hmac_init_opad(axf, key, klen, &auth_ctx);
t4_copy_partial_hash(axf->type, &auth_ctx, dst);
+
+ explicit_bzero(&auth_ctx, sizeof(auth_ctx));
}
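The open-coded pad setup removed above is now centralized in `hmac_init_ipad()`/`hmac_init_opad()`; the core of both is padding the key out to the hash block size and XORing it with the 0x36/0x5c HMAC constants. A sketch of just that pad derivation (hash calls omitted; assumes the key has already been shortened to at most the block size, as the removed digest-the-key path ensured):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define HMAC_IPAD_VAL	0x36
#define HMAC_OPAD_VAL	0x5c

/* Derive the HMAC inner and outer pad blocks from a key no longer than
 * the hash block size: zero-pad the key to blocksize, then XOR with the
 * ipad/opad constants.  The partial hashes of these blocks are what the
 * driver stores in the key context. */
static void
hmac_make_pads(const uint8_t *key, size_t klen, size_t blocksize,
    uint8_t *ipad, uint8_t *opad)
{
	size_t i;

	memcpy(ipad, key, klen);
	memset(ipad + klen, 0, blocksize - klen);
	for (i = 0; i < blocksize; i++) {
		opad[i] = ipad[i] ^ HMAC_OPAD_VAL;
		ipad[i] ^= HMAC_IPAD_VAL;
	}
}
```

The `explicit_bzero()` added at the end of `t4_init_hmac_digest()` matters because `auth_ctx` holds key-derived state; a plain `bzero()` could be optimized away as a dead store.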
/*
diff --git a/sys/dev/cxgbe/tom/t4_tls.c b/sys/dev/cxgbe/tom/t4_tls.c
index 57bae811f0f6..a82edd29bc3f 100644
--- a/sys/dev/cxgbe/tom/t4_tls.c
+++ b/sys/dev/cxgbe/tom/t4_tls.c
@@ -892,7 +892,7 @@ init_ktls_key_context(struct ktls_session *tls, struct tls_key_context *k_ctx)
k_ctx->tx_key_info_size += GMAC_BLOCK_LEN;
memcpy(k_ctx->tx.salt, tls->params.iv, SALT_SIZE);
t4_init_gmac_hash(tls->params.cipher_key,
- tls->params.cipher_key_len * 8, hash);
+ tls->params.cipher_key_len, hash);
} else {
switch (tls->params.auth_algorithm) {
case CRYPTO_SHA1_HMAC:
@@ -920,7 +920,7 @@ init_ktls_key_context(struct ktls_session *tls, struct tls_key_context *k_ctx)
k_ctx->tx_key_info_size += roundup2(mac_key_size, 16) * 2;
k_ctx->mac_secret_size = mac_key_size;
t4_init_hmac_digest(axf, mac_key_size, tls->params.auth_key,
- tls->params.auth_key_len * 8, hash);
+ tls->params.auth_key_len, hash);
}
k_ctx->frag_size = tls->params.max_frame_len;
diff --git a/sys/dev/glxsb/glxsb.c b/sys/dev/glxsb/glxsb.c
index 4d89e7d6756a..0e80b1dba2aa 100644
--- a/sys/dev/glxsb/glxsb.c
+++ b/sys/dev/glxsb/glxsb.c
@@ -51,7 +51,6 @@ __FBSDID("$FreeBSD$");
#include <dev/pci/pcireg.h>
#include <opencrypto/cryptodev.h>
-#include <opencrypto/cryptosoft.h>
#include <opencrypto/xform.h>
#include "cryptodev_if.h"
@@ -172,8 +171,6 @@ struct glxsb_dma_map {
struct glxsb_taskop {
struct glxsb_session *to_ses; /* crypto session */
struct cryptop *to_crp; /* cryptop to perfom */
- struct cryptodesc *to_enccrd; /* enccrd to perform */
- struct cryptodesc *to_maccrd; /* maccrd to perform */
};
struct glxsb_softc {
@@ -204,13 +201,16 @@ static void glxsb_dma_free(struct glxsb_softc *, struct glxsb_dma_map *);
static void glxsb_rnd(void *);
static int glxsb_crypto_setup(struct glxsb_softc *);
-static int glxsb_crypto_newsession(device_t, crypto_session_t, struct cryptoini *);
+static int glxsb_crypto_probesession(device_t,
+ const struct crypto_session_params *);
+static int glxsb_crypto_newsession(device_t, crypto_session_t,
+ const struct crypto_session_params *);
static void glxsb_crypto_freesession(device_t, crypto_session_t);
static int glxsb_aes(struct glxsb_softc *, uint32_t, uint32_t,
- uint32_t, void *, int, void *);
+ uint32_t, const void *, int, const void *);
-static int glxsb_crypto_encdec(struct cryptop *, struct cryptodesc *,
- struct glxsb_session *, struct glxsb_softc *);
+static int glxsb_crypto_encdec(struct cryptop *, struct glxsb_session *,
+ struct glxsb_softc *);
static void glxsb_crypto_task(void *, int);
static int glxsb_crypto_process(device_t, struct cryptop *, int);
@@ -222,6 +222,7 @@ static device_method_t glxsb_methods[] = {
DEVMETHOD(device_detach, glxsb_detach),
/* crypto device methods */
+ DEVMETHOD(cryptodev_probesession, glxsb_crypto_probesession),
DEVMETHOD(cryptodev_newsession, glxsb_crypto_newsession),
DEVMETHOD(cryptodev_freesession, glxsb_crypto_freesession),
DEVMETHOD(cryptodev_process, glxsb_crypto_process),
@@ -477,47 +478,24 @@ glxsb_crypto_setup(struct glxsb_softc *sc)
mtx_init(&sc->sc_task_mtx, "glxsb_crypto_mtx", NULL, MTX_DEF);
- if (crypto_register(sc->sc_cid, CRYPTO_AES_CBC, 0, 0) != 0)
- goto crypto_fail;
- if (crypto_register(sc->sc_cid, CRYPTO_NULL_HMAC, 0, 0) != 0)
- goto crypto_fail;
- if (crypto_register(sc->sc_cid, CRYPTO_MD5_HMAC, 0, 0) != 0)
- goto crypto_fail;
- if (crypto_register(sc->sc_cid, CRYPTO_SHA1_HMAC, 0, 0) != 0)
- goto crypto_fail;
- if (crypto_register(sc->sc_cid, CRYPTO_RIPEMD160_HMAC, 0, 0) != 0)
- goto crypto_fail;
- if (crypto_register(sc->sc_cid, CRYPTO_SHA2_256_HMAC, 0, 0) != 0)
- goto crypto_fail;
- if (crypto_register(sc->sc_cid, CRYPTO_SHA2_384_HMAC, 0, 0) != 0)
- goto crypto_fail;
- if (crypto_register(sc->sc_cid, CRYPTO_SHA2_512_HMAC, 0, 0) != 0)
- goto crypto_fail;
-
return (0);
-
-crypto_fail:
- device_printf(sc->sc_dev, "cannot register crypto\n");
- crypto_unregister_all(sc->sc_cid);
- mtx_destroy(&sc->sc_task_mtx);
- return (ENOMEM);
}
static int
-glxsb_crypto_newsession(device_t dev, crypto_session_t cses,
- struct cryptoini *cri)
+glxsb_crypto_probesession(device_t dev, const struct crypto_session_params *csp)
{
- struct glxsb_softc *sc = device_get_softc(dev);
- struct glxsb_session *ses;
- struct cryptoini *encini, *macini;
- int error;
- if (sc == NULL || cri == NULL)
+ if (csp->csp_flags != 0)
return (EINVAL);
- encini = macini = NULL;
- for (; cri != NULL; cri = cri->cri_next) {
- switch(cri->cri_alg) {
+ /*
+ * We only support HMAC algorithms to be able to work with
+ * ipsec(4), so if we are asked only for authentication without
+ * encryption, don't pretend we can accelerate it.
+ */
+ switch (csp->csp_mode) {
+ case CSP_MODE_ETA:
+ switch (csp->csp_auth_alg) {
case CRYPTO_NULL_HMAC:
case CRYPTO_MD5_HMAC:
case CRYPTO_SHA1_HMAC:
@@ -525,43 +503,42 @@ glxsb_crypto_newsession(device_t dev, crypto_session_t cses,
case CRYPTO_SHA2_256_HMAC:
case CRYPTO_SHA2_384_HMAC:
case CRYPTO_SHA2_512_HMAC:
- if (macini != NULL)
- return (EINVAL);
- macini = cri;
break;
+ default:
+ return (EINVAL);
+ }
+ /* FALLTHROUGH */
+ case CSP_MODE_CIPHER:
+ switch (csp->csp_cipher_alg) {
case CRYPTO_AES_CBC:
- if (encini != NULL)
+ if (csp->csp_cipher_klen * 8 != 128)
return (EINVAL);
- encini = cri;
break;
default:
return (EINVAL);
}
+ default:
+ return (EINVAL);
}
+ return (CRYPTODEV_PROBE_HARDWARE);
+}
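glxsb now answers a probe with CRYPTODEV_PROBE_HARDWARE instead of registering an algorithm list, and the framework compares the negative return values from all drivers, preferring the least negative one, as device_probe(9) does. A sketch of that selection; the constant values are assumptions for illustration, not copied from cryptodev.h:

```c
#include <assert.h>
#include <stddef.h>

/* Assumed preference levels; less negative wins. */
#define CRYPTODEV_PROBE_HARDWARE	(-10)
#define CRYPTODEV_PROBE_ACCEL_SOFTWARE	(-25)
#define CRYPTODEV_PROBE_SOFTWARE	(-100)

/*
 * Pick the best driver for a session: a negative probe value means
 * the driver can handle it, a positive errno means it declined, and
 * the largest (least negative) value takes the session.
 */
static int
pick_best(const int *probes, size_t n)
{
	size_t i;
	int best = 1;	/* > 0: no driver matched */

	for (i = 0; i < n; i++) {
		if (probes[i] >= 0)	/* driver declined (errno) */
			continue;
		if (best > 0 || probes[i] > best)
			best = probes[i];
	}
	return (best);
}
```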
- /*
- * We only support HMAC algorithms to be able to work with
- * ipsec(4), so if we are asked only for authentication without
- * encryption, don't pretend we can accellerate it.
- */
- if (encini == NULL)
- return (EINVAL);
+static int
+glxsb_crypto_newsession(device_t dev, crypto_session_t cses,
+ const struct crypto_session_params *csp)
+{
+ struct glxsb_softc *sc = device_get_softc(dev);
+ struct glxsb_session *ses;
+ int error;
ses = crypto_get_driver_session(cses);
- if (encini->cri_alg == CRYPTO_AES_CBC) {
- if (encini->cri_klen != 128) {
- glxsb_crypto_freesession(sc->sc_dev, cses);
- return (EINVAL);
- }
- arc4rand(ses->ses_iv, sizeof(ses->ses_iv), 0);
- ses->ses_klen = encini->cri_klen;
- /* Copy the key (Geode LX wants the primary key only) */
- bcopy(encini->cri_key, ses->ses_key, sizeof(ses->ses_key));
- }
+ /* Copy the key (Geode LX wants the primary key only) */
+ if (csp->csp_cipher_key != NULL)
+ bcopy(csp->csp_cipher_key, ses->ses_key, sizeof(ses->ses_key));
- if (macini != NULL) {
- error = glxsb_hash_setup(ses, macini);
+ if (csp->csp_auth_alg != 0) {
+ error = glxsb_hash_setup(ses, csp);
if (error != 0) {
glxsb_crypto_freesession(sc->sc_dev, cses);
return (error);
@@ -574,19 +551,15 @@ glxsb_crypto_newsession(device_t dev, crypto_session_t cses,
static void
glxsb_crypto_freesession(device_t dev, crypto_session_t cses)
{
- struct glxsb_softc *sc = device_get_softc(dev);
struct glxsb_session *ses;
- if (sc == NULL)
- return;
-
ses = crypto_get_driver_session(cses);
glxsb_hash_free(ses);
}
static int
glxsb_aes(struct glxsb_softc *sc, uint32_t control, uint32_t psrc,
- uint32_t pdst, void *key, int len, void *iv)
+ uint32_t pdst, const void *key, int len, const void *iv)
{
uint32_t status;
int i;
@@ -652,23 +625,24 @@ glxsb_aes(struct glxsb_softc *sc, uint32_t control, uint32_t psrc,
}
static int
-glxsb_crypto_encdec(struct cryptop *crp, struct cryptodesc *crd,
- struct glxsb_session *ses, struct glxsb_softc *sc)
+glxsb_crypto_encdec(struct cryptop *crp, struct glxsb_session *ses,
+ struct glxsb_softc *sc)
{
char *op_src, *op_dst;
+ const void *key;
uint32_t op_psrc, op_pdst;
- uint8_t op_iv[SB_AES_BLOCK_SIZE], *piv;
+ uint8_t op_iv[SB_AES_BLOCK_SIZE];
int error;
int len, tlen, xlen;
int offset;
uint32_t control;
- if (crd == NULL || (crd->crd_len % SB_AES_BLOCK_SIZE) != 0)
+ if ((crp->crp_payload_length % SB_AES_BLOCK_SIZE) != 0)
return (EINVAL);
/* How much of our buffer will we need to use? */
- xlen = crd->crd_len > GLXSB_MAX_AES_LEN ?
- GLXSB_MAX_AES_LEN : crd->crd_len;
+ xlen = crp->crp_payload_length > GLXSB_MAX_AES_LEN ?
+ GLXSB_MAX_AES_LEN : crp->crp_payload_length;
/*
* XXX Check if we can have input == output on Geode LX.
@@ -680,73 +654,57 @@ glxsb_crypto_encdec(struct cryptop *crp, struct cryptodesc *crd,
op_psrc = sc->sc_dma.dma_paddr;
op_pdst = sc->sc_dma.dma_paddr + xlen;
- if (crd->crd_flags & CRD_F_ENCRYPT) {
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
control = SB_CTL_ENC;
- if (crd->crd_flags & CRD_F_IV_EXPLICIT)
- bcopy(crd->crd_iv, op_iv, sizeof(op_iv));
- else
- bcopy(ses->ses_iv, op_iv, sizeof(op_iv));
-
- if ((crd->crd_flags & CRD_F_IV_PRESENT) == 0) {
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- crd->crd_inject, sizeof(op_iv), op_iv);
- }
- } else {
+ else
control = SB_CTL_DEC;
- if (crd->crd_flags & CRD_F_IV_EXPLICIT)
- bcopy(crd->crd_iv, op_iv, sizeof(op_iv));
- else {
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crd->crd_inject, sizeof(op_iv), op_iv);
- }
- }
+ if (crp->crp_flags & CRYPTO_F_IV_GENERATE) {
+ arc4rand(op_iv, sizeof(op_iv), 0);
+ crypto_copyback(crp, crp->crp_iv_start, sizeof(op_iv), op_iv);
+ } else if (crp->crp_flags & CRYPTO_F_IV_SEPARATE)
+ memcpy(op_iv, crp->crp_iv, sizeof(op_iv));
+ else
+ crypto_copydata(crp, crp->crp_iv_start, sizeof(op_iv), op_iv);
+
offset = 0;
- tlen = crd->crd_len;
- piv = op_iv;
+ tlen = crp->crp_payload_length;
+
+ if (crp->crp_cipher_key != NULL)
+ key = crp->crp_cipher_key;
+ else
+ key = ses->ses_key;
/* Process the data in GLXSB_MAX_AES_LEN chunks */
while (tlen > 0) {
len = (tlen > GLXSB_MAX_AES_LEN) ? GLXSB_MAX_AES_LEN : tlen;
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crd->crd_skip + offset, len, op_src);
+ crypto_copydata(crp, crp->crp_payload_start + offset, len,
+ op_src);
glxsb_dma_pre_op(sc, &sc->sc_dma);
- error = glxsb_aes(sc, control, op_psrc, op_pdst, ses->ses_key,
- len, op_iv);
+ error = glxsb_aes(sc, control, op_psrc, op_pdst, key, len,
+ op_iv);
glxsb_dma_post_op(sc, &sc->sc_dma);
if (error != 0)
return (error);
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- crd->crd_skip + offset, len, op_dst);
+ crypto_copyback(crp, crp->crp_payload_start + offset, len,
+ op_dst);
offset += len;
tlen -= len;
- if (tlen <= 0) { /* Ideally, just == 0 */
- /* Finished - put the IV in session IV */
- piv = ses->ses_iv;
- }
-
/*
- * Copy out last block for use as next iteration/session IV.
- *
- * piv is set to op_iv[] before the loop starts, but is
- * set to ses->ses_iv if we're going to exit the loop this
- * time.
+ * Copy out last block for use as next iteration IV.
*/
- if (crd->crd_flags & CRD_F_ENCRYPT)
- bcopy(op_dst + len - sizeof(op_iv), piv, sizeof(op_iv));
- else {
- /* Decryption, only need this if another iteration */
- if (tlen > 0) {
- bcopy(op_src + len - sizeof(op_iv), piv,
- sizeof(op_iv));
- }
- }
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
+ bcopy(op_dst + len - sizeof(op_iv), op_iv,
+ sizeof(op_iv));
+ else
+ bcopy(op_src + len - sizeof(op_iv), op_iv,
+ sizeof(op_iv));
} /* while */
/* All AES processing has now been done. */
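The rewritten chunk loop above no longer maintains a per-session IV; each GLXSB_MAX_AES_LEN chunk simply reuses the last ciphertext block (or last input block when decrypting) as the next chunk's IV, which is exactly how CBC chains. A toy sketch checking that chunked CBC with a carried IV matches a one-shot pass; the XOR "block cipher" is a stand-in for AES, not the driver's code:

```c
#include <assert.h>
#include <string.h>

#define BLK 16

/* Toy block "cipher": XOR with a fixed pad (stand-in for AES). */
static void
blk_enc(unsigned char *b)
{
	int i;

	for (i = 0; i < BLK; i++)
		b[i] ^= 0x5a;
}

/* CBC-encrypt len bytes (a multiple of BLK) in place, chaining iv. */
static void
cbc_enc(unsigned char *buf, size_t len, unsigned char *iv)
{
	size_t off;
	int i;

	for (off = 0; off < len; off += BLK) {
		for (i = 0; i < BLK; i++)
			buf[off + i] ^= iv[i];
		blk_enc(buf + off);
		memcpy(iv, buf + off, BLK);	/* last ct block is next IV */
	}
}
```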
@@ -759,30 +717,31 @@ static void
glxsb_crypto_task(void *arg, int pending)
{
struct glxsb_softc *sc = arg;
+ const struct crypto_session_params *csp;
struct glxsb_session *ses;
struct cryptop *crp;
- struct cryptodesc *enccrd, *maccrd;
int error;
- maccrd = sc->sc_to.to_maccrd;
- enccrd = sc->sc_to.to_enccrd;
crp = sc->sc_to.to_crp;
ses = sc->sc_to.to_ses;
+ csp = crypto_get_params(crp->crp_session);
/* Perform data authentication if requested before encryption */
- if (maccrd != NULL && maccrd->crd_next == enccrd) {
- error = glxsb_hash_process(ses, maccrd, crp);
+ if (csp->csp_mode == CSP_MODE_ETA &&
+ !CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) {
+ error = glxsb_hash_process(ses, csp, crp);
if (error != 0)
goto out;
}
- error = glxsb_crypto_encdec(crp, enccrd, ses, sc);
+ error = glxsb_crypto_encdec(crp, ses, sc);
if (error != 0)
goto out;
/* Perform data authentication if requested after encryption */
- if (maccrd != NULL && enccrd->crd_next == maccrd) {
- error = glxsb_hash_process(ses, maccrd, crp);
+ if (csp->csp_mode == CSP_MODE_ETA &&
+ CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) {
+ error = glxsb_hash_process(ses, csp, crp);
if (error != 0)
goto out;
}
@@ -801,52 +760,6 @@ glxsb_crypto_process(device_t dev, struct cryptop *crp, int hint)
{
struct glxsb_softc *sc = device_get_softc(dev);
struct glxsb_session *ses;
- struct cryptodesc *crd, *enccrd, *maccrd;
- int error = 0;
-
- enccrd = maccrd = NULL;
-
- /* Sanity check. */
- if (crp == NULL)
- return (EINVAL);
-
- if (crp->crp_callback == NULL || crp->crp_desc == NULL) {
- error = EINVAL;
- goto fail;
- }
-
- for (crd = crp->crp_desc; crd != NULL; crd = crd->crd_next) {
- switch (crd->crd_alg) {
- case CRYPTO_NULL_HMAC:
- case CRYPTO_MD5_HMAC:
- case CRYPTO_SHA1_HMAC:
- case CRYPTO_RIPEMD160_HMAC:
- case CRYPTO_SHA2_256_HMAC:
- case CRYPTO_SHA2_384_HMAC:
- case CRYPTO_SHA2_512_HMAC:
- if (maccrd != NULL) {
- error = EINVAL;
- goto fail;
- }
- maccrd = crd;
- break;
- case CRYPTO_AES_CBC:
- if (enccrd != NULL) {
- error = EINVAL;
- goto fail;
- }
- enccrd = crd;
- break;
- default:
- error = EINVAL;
- goto fail;
- }
- }
-
- if (enccrd == NULL || enccrd->crd_len % AES_BLOCK_LEN != 0) {
- error = EINVAL;
- goto fail;
- }
ses = crypto_get_driver_session(crp->crp_session);
@@ -857,17 +770,10 @@ glxsb_crypto_process(device_t dev, struct cryptop *crp, int hint)
}
sc->sc_task_count++;
- sc->sc_to.to_maccrd = maccrd;
- sc->sc_to.to_enccrd = enccrd;
sc->sc_to.to_crp = crp;
sc->sc_to.to_ses = ses;
mtx_unlock(&sc->sc_task_mtx);
taskqueue_enqueue(sc->sc_tq, &sc->sc_cryptotask);
return(0);
-
-fail:
- crp->crp_etype = error;
- crypto_done(crp);
- return (error);
}
diff --git a/sys/dev/glxsb/glxsb.h b/sys/dev/glxsb/glxsb.h
index fe5128a744c6..27e5bb44709c 100644
--- a/sys/dev/glxsb/glxsb.h
+++ b/sys/dev/glxsb/glxsb.h
@@ -37,8 +37,6 @@
struct glxsb_session {
uint32_t ses_key[4]; /* key */
- uint8_t ses_iv[SB_AES_BLOCK_SIZE]; /* initialization vector */
- int ses_klen; /* key len */
struct auth_hash *ses_axf;
uint8_t *ses_ictx;
uint8_t *ses_octx;
@@ -46,10 +44,10 @@ struct glxsb_session {
};
int glxsb_hash_setup(struct glxsb_session *ses,
- struct cryptoini *macini);
+ const struct crypto_session_params *csp);
int glxsb_hash_process(struct glxsb_session *ses,
- struct cryptodesc *maccrd, struct cryptop *crp);
+ const struct crypto_session_params *csp, struct cryptop *crp);
void glxsb_hash_free(struct glxsb_session *ses);
diff --git a/sys/dev/glxsb/glxsb_hash.c b/sys/dev/glxsb/glxsb_hash.c
index c5c2028103f2..73d9896ffc01 100644
--- a/sys/dev/glxsb/glxsb_hash.c
+++ b/sys/dev/glxsb/glxsb_hash.c
@@ -33,7 +33,6 @@ __FBSDID("$FreeBSD$");
#include <sys/systm.h>
#include <sys/malloc.h>
-#include <opencrypto/cryptosoft.h> /* for hmac_ipad_buffer and hmac_opad_buffer */
#include <opencrypto/xform.h>
#include "glxsb.h"
@@ -51,92 +50,66 @@ __FBSDID("$FreeBSD$");
MALLOC_DECLARE(M_GLXSB);
static void
-glxsb_hash_key_setup(struct glxsb_session *ses, caddr_t key, int klen)
+glxsb_hash_key_setup(struct glxsb_session *ses, const char *key, int klen)
{
struct auth_hash *axf;
- int i;
- klen /= 8;
axf = ses->ses_axf;
-
- for (i = 0; i < klen; i++)
- key[i] ^= HMAC_IPAD_VAL;
-
- axf->Init(ses->ses_ictx);
- axf->Update(ses->ses_ictx, key, klen);
- axf->Update(ses->ses_ictx, hmac_ipad_buffer, axf->blocksize - klen);
-
- for (i = 0; i < klen; i++)
- key[i] ^= (HMAC_IPAD_VAL ^ HMAC_OPAD_VAL);
-
- axf->Init(ses->ses_octx);
- axf->Update(ses->ses_octx, key, klen);
- axf->Update(ses->ses_octx, hmac_opad_buffer, axf->blocksize - klen);
-
- for (i = 0; i < klen; i++)
- key[i] ^= HMAC_OPAD_VAL;
+ hmac_init_ipad(axf, key, klen, ses->ses_ictx);
+ hmac_init_opad(axf, key, klen, ses->ses_octx);
}
/*
* Compute keyed-hash authenticator.
*/
static int
-glxsb_authcompute(struct glxsb_session *ses, struct cryptodesc *crd,
- caddr_t buf, int flags)
+glxsb_authcompute(struct glxsb_session *ses, struct cryptop *crp)
{
- u_char hash[HASH_MAX_LEN];
+ u_char hash[HASH_MAX_LEN], hash2[HASH_MAX_LEN];
struct auth_hash *axf;
union authctx ctx;
int error;
axf = ses->ses_axf;
bcopy(ses->ses_ictx, &ctx, axf->ctxsize);
- error = crypto_apply(flags, buf, crd->crd_skip, crd->crd_len,
+ error = crypto_apply(crp, crp->crp_aad_start, crp->crp_aad_length,
(int (*)(void *, void *, unsigned int))axf->Update, (caddr_t)&ctx);
if (error != 0)
return (error);
+ error = crypto_apply(crp, crp->crp_payload_start,
+ crp->crp_payload_length,
+ (int (*)(void *, void *, unsigned int))axf->Update, (caddr_t)&ctx);
+ if (error != 0)
+ return (error);
+
axf->Final(hash, &ctx);
bcopy(ses->ses_octx, &ctx, axf->ctxsize);
axf->Update(&ctx, hash, axf->hashsize);
axf->Final(hash, &ctx);
- /* Inject the authentication data */
- crypto_copyback(flags, buf, crd->crd_inject,
- ses->ses_mlen == 0 ? axf->hashsize : ses->ses_mlen, hash);
+ /* Verify or inject the authentication data */
+ if (crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) {
+ crypto_copydata(crp, crp->crp_digest_start, ses->ses_mlen,
+ hash2);
+ if (timingsafe_bcmp(hash, hash2, ses->ses_mlen) != 0)
+ return (EBADMSG);
+ } else
+ crypto_copyback(crp, crp->crp_digest_start, ses->ses_mlen,
+ hash);
return (0);
}
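glxsb_authcompute() now verifies a digest itself with timingsafe_bcmp() when CRYPTO_OP_VERIFY_DIGEST is set, instead of always injecting the computed MAC. A constant-time comparison touches every byte regardless of where the first mismatch falls, so the run time leaks nothing about the expected MAC. A sketch of the idea (not the libkern implementation):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Compare two buffers without an early exit: OR together the XOR of
 * every byte pair and test the accumulated result only at the end.
 */
static int
ct_bcmp(const void *a, const void *b, size_t len)
{
	const unsigned char *pa = a, *pb = b;
	unsigned char acc = 0;
	size_t i;

	for (i = 0; i < len; i++)
		acc |= pa[i] ^ pb[i];
	return (acc != 0);
}
```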
int
-glxsb_hash_setup(struct glxsb_session *ses, struct cryptoini *macini)
+glxsb_hash_setup(struct glxsb_session *ses,
+ const struct crypto_session_params *csp)
{
- ses->ses_mlen = macini->cri_mlen;
-
- /* Find software structure which describes HMAC algorithm. */
- switch (macini->cri_alg) {
- case CRYPTO_NULL_HMAC:
- ses->ses_axf = &auth_hash_null;
- break;
- case CRYPTO_MD5_HMAC:
- ses->ses_axf = &auth_hash_hmac_md5;
- break;
- case CRYPTO_SHA1_HMAC:
- ses->ses_axf = &auth_hash_hmac_sha1;
- break;
- case CRYPTO_RIPEMD160_HMAC:
- ses->ses_axf = &auth_hash_hmac_ripemd_160;
- break;
- case CRYPTO_SHA2_256_HMAC:
- ses->ses_axf = &auth_hash_hmac_sha2_256;
- break;
- case CRYPTO_SHA2_384_HMAC:
- ses->ses_axf = &auth_hash_hmac_sha2_384;
- break;
- case CRYPTO_SHA2_512_HMAC:
- ses->ses_axf = &auth_hash_hmac_sha2_512;
- break;
- }
+ ses->ses_axf = crypto_auth_hash(csp);
+ if (csp->csp_auth_mlen == 0)
+ ses->ses_mlen = ses->ses_axf->hashsize;
+ else
+ ses->ses_mlen = csp->csp_auth_mlen;
/* Allocate memory for HMAC inner and outer contexts. */
ses->ses_ictx = malloc(ses->ses_axf->ctxsize, M_GLXSB,
@@ -147,23 +120,24 @@ glxsb_hash_setup(struct glxsb_session *ses, struct cryptoini *macini)
return (ENOMEM);
/* Setup key if given. */
- if (macini->cri_key != NULL) {
- glxsb_hash_key_setup(ses, macini->cri_key,
- macini->cri_klen);
+ if (csp->csp_auth_key != NULL) {
+ glxsb_hash_key_setup(ses, csp->csp_auth_key,
+ csp->csp_auth_klen);
}
return (0);
}
int
-glxsb_hash_process(struct glxsb_session *ses, struct cryptodesc *maccrd,
- struct cryptop *crp)
+glxsb_hash_process(struct glxsb_session *ses,
+ const struct crypto_session_params *csp, struct cryptop *crp)
{
int error;
- if ((maccrd->crd_flags & CRD_F_KEY_EXPLICIT) != 0)
- glxsb_hash_key_setup(ses, maccrd->crd_key, maccrd->crd_klen);
+ if (crp->crp_auth_key != NULL)
+ glxsb_hash_key_setup(ses, crp->crp_auth_key,
+ csp->csp_auth_klen);
- error = glxsb_authcompute(ses, maccrd, crp->crp_buf, crp->crp_flags);
+ error = glxsb_authcompute(ses, crp);
return (error);
}
diff --git a/sys/dev/hifn/hifn7751.c b/sys/dev/hifn/hifn7751.c
index ce0f060aa7d2..7f1889767090 100644
--- a/sys/dev/hifn/hifn7751.c
+++ b/sys/dev/hifn/hifn7751.c
@@ -61,6 +61,7 @@ __FBSDID("$FreeBSD$");
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/sysctl.h>
+#include <sys/uio.h>
#include <vm/vm.h>
#include <vm/pmap.h>
@@ -71,6 +72,7 @@ __FBSDID("$FreeBSD$");
#include <sys/rman.h>
#include <opencrypto/cryptodev.h>
+#include <opencrypto/xform_auth.h>
#include <sys/random.h>
#include <sys/kobj.h>
@@ -102,7 +104,9 @@ static int hifn_suspend(device_t);
static int hifn_resume(device_t);
static int hifn_shutdown(device_t);
-static int hifn_newsession(device_t, crypto_session_t, struct cryptoini *);
+static int hifn_probesession(device_t, const struct crypto_session_params *);
+static int hifn_newsession(device_t, crypto_session_t,
+ const struct crypto_session_params *);
static int hifn_process(device_t, struct cryptop *, int);
static device_method_t hifn_methods[] = {
@@ -115,6 +119,7 @@ static device_method_t hifn_methods[] = {
DEVMETHOD(device_shutdown, hifn_shutdown),
/* crypto device methods */
+ DEVMETHOD(cryptodev_probesession, hifn_probesession),
DEVMETHOD(cryptodev_newsession, hifn_newsession),
DEVMETHOD(cryptodev_process, hifn_process),
@@ -356,7 +361,7 @@ hifn_attach(device_t dev)
caddr_t kva;
int rseg, rid;
char rbase;
- u_int16_t ena, rev;
+ uint16_t rev;
sc->sc_dev = dev;
@@ -558,33 +563,22 @@ hifn_attach(device_t dev)
2 + 2*((sc->sc_pllconfig & HIFN_PLL_ND) >> 11));
printf("\n");
- sc->sc_cid = crypto_get_driverid(dev, sizeof(struct hifn_session),
- CRYPTOCAP_F_HARDWARE);
- if (sc->sc_cid < 0) {
- device_printf(dev, "could not get crypto driver id\n");
- goto fail_intr;
- }
-
WRITE_REG_0(sc, HIFN_0_PUCNFG,
READ_REG_0(sc, HIFN_0_PUCNFG) | HIFN_PUCNFG_CHIPID);
- ena = READ_REG_0(sc, HIFN_0_PUSTAT) & HIFN_PUSTAT_CHIPENA;
+ sc->sc_ena = READ_REG_0(sc, HIFN_0_PUSTAT) & HIFN_PUSTAT_CHIPENA;
- switch (ena) {
+ switch (sc->sc_ena) {
case HIFN_PUSTAT_ENA_2:
- crypto_register(sc->sc_cid, CRYPTO_3DES_CBC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_ARC4, 0, 0);
- if (sc->sc_flags & HIFN_HAS_AES)
- crypto_register(sc->sc_cid, CRYPTO_AES_CBC, 0, 0);
- /*FALLTHROUGH*/
case HIFN_PUSTAT_ENA_1:
- crypto_register(sc->sc_cid, CRYPTO_MD5, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_SHA1, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_MD5_HMAC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_SHA1_HMAC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_DES_CBC, 0, 0);
+ sc->sc_cid = crypto_get_driverid(dev,
+ sizeof(struct hifn_session), CRYPTOCAP_F_HARDWARE);
+ if (sc->sc_cid < 0) {
+ device_printf(dev, "could not get crypto driver id\n");
+ goto fail_intr;
+ }
break;
}
-
+
bus_dmamap_sync(sc->sc_dmat, sc->sc_dmamap,
BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);
@@ -1547,6 +1541,7 @@ hifn_init_dma(struct hifn_softc *sc)
static u_int
hifn_write_command(struct hifn_command *cmd, u_int8_t *buf)
{
+ struct cryptop *crp;
u_int8_t *buf_pos;
hifn_base_command_t *base_cmd;
hifn_mac_command_t *mac_cmd;
@@ -1554,6 +1549,7 @@ hifn_write_command(struct hifn_command *cmd, u_int8_t *buf)
int using_mac, using_crypt, len, ivlen;
u_int32_t dlen, slen;
+ crp = cmd->crp;
buf_pos = buf;
using_mac = cmd->base_masks & HIFN_BASE_CMD_MAC;
using_crypt = cmd->base_masks & HIFN_BASE_CMD_CRYPT;
@@ -1576,24 +1572,27 @@ hifn_write_command(struct hifn_command *cmd, u_int8_t *buf)
if (using_mac) {
mac_cmd = (hifn_mac_command_t *)buf_pos;
- dlen = cmd->maccrd->crd_len;
+ dlen = crp->crp_aad_length + crp->crp_payload_length;
mac_cmd->source_count = htole16(dlen & 0xffff);
dlen >>= 16;
mac_cmd->masks = htole16(cmd->mac_masks |
((dlen << HIFN_MAC_CMD_SRCLEN_S) & HIFN_MAC_CMD_SRCLEN_M));
- mac_cmd->header_skip = htole16(cmd->maccrd->crd_skip);
+ if (crp->crp_aad_length != 0)
+ mac_cmd->header_skip = htole16(crp->crp_aad_start);
+ else
+ mac_cmd->header_skip = htole16(crp->crp_payload_start);
mac_cmd->reserved = 0;
buf_pos += sizeof(hifn_mac_command_t);
}
if (using_crypt) {
cry_cmd = (hifn_crypt_command_t *)buf_pos;
- dlen = cmd->enccrd->crd_len;
+ dlen = crp->crp_payload_length;
cry_cmd->source_count = htole16(dlen & 0xffff);
dlen >>= 16;
cry_cmd->masks = htole16(cmd->cry_masks |
((dlen << HIFN_CRYPT_CMD_SRCLEN_S) & HIFN_CRYPT_CMD_SRCLEN_M));
- cry_cmd->header_skip = htole16(cmd->enccrd->crd_skip);
+ cry_cmd->header_skip = htole16(crp->crp_payload_start);
REPLACED_BY_GR_REPLACE
cry_cmd->reserved = 0;
buf_pos += sizeof(hifn_crypt_command_t);
}
@@ -1782,15 +1781,30 @@ hifn_dmamap_load_src(struct hifn_softc *sc, struct hifn_command *cmd)
return (idx);
}
+static bus_size_t
+hifn_crp_length(struct cryptop *crp)
+{
+
+ switch (crp->crp_buf_type) {
+ case CRYPTO_BUF_MBUF:
+ return (crp->crp_mbuf->m_pkthdr.len);
+ case CRYPTO_BUF_UIO:
+ return (crp->crp_uio->uio_resid);
+ case CRYPTO_BUF_CONTIG:
+ return (crp->crp_ilen);
+ default:
+ panic("bad crp buffer type");
+ }
+}
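The new hifn_crp_length() helper centralizes the "how long is this request's buffer" question that the old code answered by testing CRYPTO_F_IMBUF/CRYPTO_F_IOV flags at each call site. The same tagged-dispatch shape on a simplified request struct; the types here are stand-ins for illustration, not the real cryptop layout:

```c
#include <assert.h>
#include <stddef.h>

enum buf_type { BUF_MBUF, BUF_UIO, BUF_CONTIG };

/* Simplified stand-ins for the mbuf/uio/contiguous descriptions. */
struct req {
	enum buf_type type;
	size_t mbuf_pkthdr_len;		/* valid for BUF_MBUF */
	size_t uio_resid;		/* valid for BUF_UIO */
	size_t contig_len;		/* valid for BUF_CONTIG */
};

/* Return the total buffer length for whichever backing type is set. */
static size_t
req_length(const struct req *r)
{
	switch (r->type) {
	case BUF_MBUF:
		return (r->mbuf_pkthdr_len);
	case BUF_UIO:
		return (r->uio_resid);
	default:
		return (r->contig_len);
	}
}
```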
+
static void
-hifn_op_cb(void* arg, bus_dma_segment_t *seg, int nsegs, bus_size_t mapsize, int error)
+hifn_op_cb(void* arg, bus_dma_segment_t *seg, int nsegs, int error)
{
struct hifn_operand *op = arg;
KASSERT(nsegs <= MAX_SCATTER,
("hifn_op_cb: too many DMA segments (%u > %u) "
"returned when mapping operand", nsegs, MAX_SCATTER));
- op->mapsize = mapsize;
op->nsegs = nsegs;
bcopy(seg, op->segs, nsegs * sizeof (seg[0]));
}
@@ -1832,130 +1846,110 @@ hifn_crypto(
return (ENOMEM);
}
- if (crp->crp_flags & CRYPTO_F_IMBUF) {
- if (bus_dmamap_load_mbuf(sc->sc_dmat, cmd->src_map,
- cmd->src_m, hifn_op_cb, &cmd->src, BUS_DMA_NOWAIT)) {
- hifnstats.hst_nomem_load++;
- err = ENOMEM;
- goto err_srcmap1;
- }
- } else if (crp->crp_flags & CRYPTO_F_IOV) {
- if (bus_dmamap_load_uio(sc->sc_dmat, cmd->src_map,
- cmd->src_io, hifn_op_cb, &cmd->src, BUS_DMA_NOWAIT)) {
- hifnstats.hst_nomem_load++;
- err = ENOMEM;
- goto err_srcmap1;
- }
- } else {
- err = EINVAL;
+ if (bus_dmamap_load_crp(sc->sc_dmat, cmd->src_map, crp, hifn_op_cb,
+ &cmd->src, BUS_DMA_NOWAIT)) {
+ hifnstats.hst_nomem_load++;
+ err = ENOMEM;
goto err_srcmap1;
}
+ cmd->src_mapsize = hifn_crp_length(crp);
if (hifn_dmamap_aligned(&cmd->src)) {
cmd->sloplen = cmd->src_mapsize & 3;
cmd->dst = cmd->src;
- } else {
- if (crp->crp_flags & CRYPTO_F_IOV) {
- err = EINVAL;
- goto err_srcmap;
- } else if (crp->crp_flags & CRYPTO_F_IMBUF) {
- int totlen, len;
- struct mbuf *m, *m0, *mlast;
+ } else if (crp->crp_buf_type == CRYPTO_BUF_MBUF) {
+ int totlen, len;
+ struct mbuf *m, *m0, *mlast;
- KASSERT(cmd->dst_m == cmd->src_m,
- ("hifn_crypto: dst_m initialized improperly"));
- hifnstats.hst_unaligned++;
- /*
- * Source is not aligned on a longword boundary.
- * Copy the data to insure alignment. If we fail
- * to allocate mbufs or clusters while doing this
- * we return ERESTART so the operation is requeued
- * at the crypto later, but only if there are
- * ops already posted to the hardware; otherwise we
- * have no guarantee that we'll be re-entered.
- */
- totlen = cmd->src_mapsize;
- if (cmd->src_m->m_flags & M_PKTHDR) {
- len = MHLEN;
- MGETHDR(m0, M_NOWAIT, MT_DATA);
- if (m0 && !m_dup_pkthdr(m0, cmd->src_m, M_NOWAIT)) {
- m_free(m0);
- m0 = NULL;
- }
- } else {
- len = MLEN;
- MGET(m0, M_NOWAIT, MT_DATA);
+ KASSERT(cmd->dst_m == NULL,
+ ("hifn_crypto: dst_m initialized improperly"));
+ hifnstats.hst_unaligned++;
+
+ /*
+ * Source is not aligned on a longword boundary.
+ * Copy the data to insure alignment. If we fail
+ * to allocate mbufs or clusters while doing this
+ * we return ERESTART so the operation is requeued
+ * at the crypto later, but only if there are
+ * ops already posted to the hardware; otherwise we
+ * have no guarantee that we'll be re-entered.
+ */
+ totlen = cmd->src_mapsize;
+ if (crp->crp_mbuf->m_flags & M_PKTHDR) {
+ len = MHLEN;
+ MGETHDR(m0, M_NOWAIT, MT_DATA);
+ if (m0 && !m_dup_pkthdr(m0, crp->crp_mbuf, M_NOWAIT)) {
+ m_free(m0);
+ m0 = NULL;
}
- if (m0 == NULL) {
+ } else {
+ len = MLEN;
+ MGET(m0, M_NOWAIT, MT_DATA);
+ }
+ if (m0 == NULL) {
+ hifnstats.hst_nomem_mbuf++;
+ err = sc->sc_cmdu ? ERESTART : ENOMEM;
+ goto err_srcmap;
+ }
+ if (totlen >= MINCLSIZE) {
+ if (!(MCLGET(m0, M_NOWAIT))) {
+ hifnstats.hst_nomem_mcl++;
+ err = sc->sc_cmdu ? ERESTART : ENOMEM;
+ m_freem(m0);
+ goto err_srcmap;
+ }
+ len = MCLBYTES;
+ }
+ totlen -= len;
+ m0->m_pkthdr.len = m0->m_len = len;
+ mlast = m0;
+
+ while (totlen > 0) {
+ MGET(m, M_NOWAIT, MT_DATA);
+ if (m == NULL) {
hifnstats.hst_nomem_mbuf++;
err = sc->sc_cmdu ? ERESTART : ENOMEM;
+ m_freem(m0);
goto err_srcmap;
}
+ len = MLEN;
if (totlen >= MINCLSIZE) {
- if (!(MCLGET(m0, M_NOWAIT))) {
+ if (!(MCLGET(m, M_NOWAIT))) {
hifnstats.hst_nomem_mcl++;
err = sc->sc_cmdu ? ERESTART : ENOMEM;
+ mlast->m_next = m;
m_freem(m0);
goto err_srcmap;
}
len = MCLBYTES;
}
- totlen -= len;
- m0->m_pkthdr.len = m0->m_len = len;
- mlast = m0;
- while (totlen > 0) {
- MGET(m, M_NOWAIT, MT_DATA);
- if (m == NULL) {
- hifnstats.hst_nomem_mbuf++;
- err = sc->sc_cmdu ? ERESTART : ENOMEM;
- m_freem(m0);
- goto err_srcmap;
- }
- len = MLEN;
- if (totlen >= MINCLSIZE) {
- if (!(MCLGET(m, M_NOWAIT))) {
- hifnstats.hst_nomem_mcl++;
- err = sc->sc_cmdu ? ERESTART : ENOMEM;
- mlast->m_next = m;
- m_freem(m0);
- goto err_srcmap;
- }
- len = MCLBYTES;
- }
-
- m->m_len = len;
- m0->m_pkthdr.len += len;
- totlen -= len;
+ m->m_len = len;
+ m0->m_pkthdr.len += len;
+ totlen -= len;
- mlast->m_next = m;
- mlast = m;
- }
- cmd->dst_m = m0;
+ mlast->m_next = m;
+ mlast = m;
}
- }
+ cmd->dst_m = m0;
- if (cmd->dst_map == NULL) {
- if (bus_dmamap_create(sc->sc_dmat, BUS_DMA_NOWAIT, &cmd->dst_map)) {
+ if (bus_dmamap_create(sc->sc_dmat, BUS_DMA_NOWAIT,
+ &cmd->dst_map)) {
hifnstats.hst_nomem_map++;
err = ENOMEM;
goto err_srcmap;
}
- if (crp->crp_flags & CRYPTO_F_IMBUF) {
- if (bus_dmamap_load_mbuf(sc->sc_dmat, cmd->dst_map,
- cmd->dst_m, hifn_op_cb, &cmd->dst, BUS_DMA_NOWAIT)) {
- hifnstats.hst_nomem_map++;
- err = ENOMEM;
- goto err_dstmap1;
- }
- } else if (crp->crp_flags & CRYPTO_F_IOV) {
- if (bus_dmamap_load_uio(sc->sc_dmat, cmd->dst_map,
- cmd->dst_io, hifn_op_cb, &cmd->dst, BUS_DMA_NOWAIT)) {
- hifnstats.hst_nomem_load++;
- err = ENOMEM;
- goto err_dstmap1;
- }
+
+ if (bus_dmamap_load_mbuf_sg(sc->sc_dmat, cmd->dst_map, m0,
+ cmd->dst_segs, &cmd->dst_nsegs, 0)) {
+ hifnstats.hst_nomem_map++;
+ err = ENOMEM;
+ goto err_dstmap1;
}
+ cmd->dst_mapsize = m0->m_pkthdr.len;
+ } else {
+ err = EINVAL;
+ goto err_srcmap;
}
#ifdef HIFN_DEBUG
@@ -2111,8 +2105,8 @@ err_dstmap1:
if (cmd->src_map != cmd->dst_map)
bus_dmamap_destroy(sc->sc_dmat, cmd->dst_map);
err_srcmap:
- if (crp->crp_flags & CRYPTO_F_IMBUF) {
- if (cmd->src_m != cmd->dst_m)
+ if (crp->crp_buf_type == CRYPTO_BUF_MBUF) {
+ if (cmd->dst_m != NULL)
m_freem(cmd->dst_m);
}
bus_dmamap_unload(sc->sc_dmat, cmd->src_map);
@@ -2307,67 +2301,121 @@ hifn_intr(void *arg)
}
}
-/*
- * Allocate a new 'session' and return an encoded session id. 'sidp'
- * contains our registration id, and should contain an encoded session
- * id on successful allocation.
- */
-static int
-hifn_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
+static bool
+hifn_auth_supported(struct hifn_softc *sc,
+ const struct crypto_session_params *csp)
{
- struct hifn_softc *sc = device_get_softc(dev);
- struct cryptoini *c;
- int mac = 0, cry = 0;
- struct hifn_session *ses;
- KASSERT(sc != NULL, ("hifn_newsession: null softc"));
- if (cri == NULL || sc == NULL)
- return (EINVAL);
+ switch (sc->sc_ena) {
+ case HIFN_PUSTAT_ENA_2:
+ case HIFN_PUSTAT_ENA_1:
+ break;
+ default:
+ return (false);
+ }
+
+ switch (csp->csp_auth_alg) {
+ case CRYPTO_MD5:
+ case CRYPTO_SHA1:
+ break;
+ case CRYPTO_MD5_HMAC:
+ case CRYPTO_SHA1_HMAC:
+ if (csp->csp_auth_klen > HIFN_MAC_KEY_LENGTH)
+ return (false);
+ break;
+ default:
+ return (false);
+ }
- ses = crypto_get_driver_session(cses);
+ return (true);
+}
- for (c = cri; c != NULL; c = c->cri_next) {
- switch (c->cri_alg) {
- case CRYPTO_MD5:
- case CRYPTO_SHA1:
- case CRYPTO_MD5_HMAC:
- case CRYPTO_SHA1_HMAC:
- if (mac)
- return (EINVAL);
- mac = 1;
- ses->hs_mlen = c->cri_mlen;
- if (ses->hs_mlen == 0) {
- switch (c->cri_alg) {
- case CRYPTO_MD5:
- case CRYPTO_MD5_HMAC:
- ses->hs_mlen = 16;
- break;
- case CRYPTO_SHA1:
- case CRYPTO_SHA1_HMAC:
- ses->hs_mlen = 20;
- break;
- }
- }
- break;
- case CRYPTO_DES_CBC:
+static bool
+hifn_cipher_supported(struct hifn_softc *sc,
+ const struct crypto_session_params *csp)
+{
+
+ if (csp->csp_cipher_klen == 0)
+ return (false);
+ if (csp->csp_ivlen > HIFN_MAX_IV_LENGTH)
+ return (false);
+ switch (sc->sc_ena) {
+ case HIFN_PUSTAT_ENA_2:
+ switch (csp->csp_cipher_alg) {
case CRYPTO_3DES_CBC:
- case CRYPTO_AES_CBC:
- /* XXX this may read fewer, does it matter? */
- read_random(ses->hs_iv,
- c->cri_alg == CRYPTO_AES_CBC ?
- HIFN_AES_IV_LENGTH : HIFN_IV_LENGTH);
- /*FALLTHROUGH*/
case CRYPTO_ARC4:
- if (cry)
- return (EINVAL);
- cry = 1;
break;
- default:
- return (EINVAL);
+ case CRYPTO_AES_CBC:
+ if ((sc->sc_flags & HIFN_HAS_AES) == 0)
+ return (false);
+ switch (csp->csp_cipher_klen) {
+ case 128 / 8:
+ case 192 / 8:
+ case 256 / 8:
+ break;
+ default:
+ return (false);
+ }
+ return (true);
}
+ /*FALLTHROUGH*/
+ case HIFN_PUSTAT_ENA_1:
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_DES_CBC:
+ return (true);
+ }
+ break;
}
- if (mac == 0 && cry == 0)
+ return (false);
+}
+
+static int
+hifn_probesession(device_t dev, const struct crypto_session_params *csp)
+{
+ struct hifn_softc *sc;
+
+ sc = device_get_softc(dev);
+ if (csp->csp_flags != 0)
return (EINVAL);
+ switch (csp->csp_mode) {
+ case CSP_MODE_DIGEST:
+ if (!hifn_auth_supported(sc, csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_CIPHER:
+ if (!hifn_cipher_supported(sc, csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_ETA:
+ if (!hifn_auth_supported(sc, csp) ||
+ !hifn_cipher_supported(sc, csp))
+ return (EINVAL);
+ break;
+ default:
+ return (EINVAL);
+ }
+
+ return (CRYPTODEV_PROBE_HARDWARE);
+}
+
+/*
+ * Allocate a new 'session'.
+ */
+static int
+hifn_newsession(device_t dev, crypto_session_t cses,
+ const struct crypto_session_params *csp)
+{
+ struct hifn_session *ses;
+
+ ses = crypto_get_driver_session(cses);
+
+ if (csp->csp_auth_alg != 0) {
+ if (csp->csp_auth_mlen == 0)
+ ses->hs_mlen = crypto_auth_hash(csp)->hashsize;
+ else
+ ses->hs_mlen = csp->csp_auth_mlen;
+ }
+
return (0);
}
@@ -2379,18 +2427,15 @@ hifn_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
static int
hifn_process(device_t dev, struct cryptop *crp, int hint)
{
+ const struct crypto_session_params *csp;
struct hifn_softc *sc = device_get_softc(dev);
struct hifn_command *cmd = NULL;
- int err, ivlen;
- struct cryptodesc *crd1, *crd2, *maccrd, *enccrd;
+ const void *mackey;
+ int err, ivlen, keylen;
struct hifn_session *ses;
- if (crp == NULL || crp->crp_callback == NULL) {
- hifnstats.hst_invalid++;
- return (EINVAL);
- }
-
ses = crypto_get_driver_session(crp->crp_session);
+
cmd = malloc(sizeof(struct hifn_command), M_DEVBUF, M_NOWAIT | M_ZERO);
if (cmd == NULL) {
hifnstats.hst_nomem++;
@@ -2398,80 +2443,26 @@ hifn_process(device_t dev, struct cryptop *crp, int hint)
goto errout;
}
- if (crp->crp_flags & CRYPTO_F_IMBUF) {
- cmd->src_m = (struct mbuf *)crp->crp_buf;
- cmd->dst_m = (struct mbuf *)crp->crp_buf;
- } else if (crp->crp_flags & CRYPTO_F_IOV) {
- cmd->src_io = (struct uio *)crp->crp_buf;
- cmd->dst_io = (struct uio *)crp->crp_buf;
- } else {
- err = EINVAL;
- goto errout; /* XXX we don't handle contiguous buffers! */
- }
+ csp = crypto_get_params(crp->crp_session);
- crd1 = crp->crp_desc;
- if (crd1 == NULL) {
+ /*
+ * The driver only supports ETA requests where there is no
+ * gap between the AAD and payload.
+ */
+ if (csp->csp_mode == CSP_MODE_ETA && crp->crp_aad_length != 0 &&
+ crp->crp_aad_start + crp->crp_aad_length !=
+ crp->crp_payload_start) {
err = EINVAL;
goto errout;
}
- crd2 = crd1->crd_next;
-
- if (crd2 == NULL) {
- if (crd1->crd_alg == CRYPTO_MD5_HMAC ||
- crd1->crd_alg == CRYPTO_SHA1_HMAC ||
- crd1->crd_alg == CRYPTO_SHA1 ||
- crd1->crd_alg == CRYPTO_MD5) {
- maccrd = crd1;
- enccrd = NULL;
- } else if (crd1->crd_alg == CRYPTO_DES_CBC ||
- crd1->crd_alg == CRYPTO_3DES_CBC ||
- crd1->crd_alg == CRYPTO_AES_CBC ||
- crd1->crd_alg == CRYPTO_ARC4) {
- if ((crd1->crd_flags & CRD_F_ENCRYPT) == 0)
- cmd->base_masks |= HIFN_BASE_CMD_DECODE;
- maccrd = NULL;
- enccrd = crd1;
- } else {
- err = EINVAL;
- goto errout;
- }
- } else {
- if ((crd1->crd_alg == CRYPTO_MD5_HMAC ||
- crd1->crd_alg == CRYPTO_SHA1_HMAC ||
- crd1->crd_alg == CRYPTO_MD5 ||
- crd1->crd_alg == CRYPTO_SHA1) &&
- (crd2->crd_alg == CRYPTO_DES_CBC ||
- crd2->crd_alg == CRYPTO_3DES_CBC ||
- crd2->crd_alg == CRYPTO_AES_CBC ||
- crd2->crd_alg == CRYPTO_ARC4) &&
- ((crd2->crd_flags & CRD_F_ENCRYPT) == 0)) {
- cmd->base_masks = HIFN_BASE_CMD_DECODE;
- maccrd = crd1;
- enccrd = crd2;
- } else if ((crd1->crd_alg == CRYPTO_DES_CBC ||
- crd1->crd_alg == CRYPTO_ARC4 ||
- crd1->crd_alg == CRYPTO_3DES_CBC ||
- crd1->crd_alg == CRYPTO_AES_CBC) &&
- (crd2->crd_alg == CRYPTO_MD5_HMAC ||
- crd2->crd_alg == CRYPTO_SHA1_HMAC ||
- crd2->crd_alg == CRYPTO_MD5 ||
- crd2->crd_alg == CRYPTO_SHA1) &&
- (crd1->crd_flags & CRD_F_ENCRYPT)) {
- enccrd = crd1;
- maccrd = crd2;
- } else {
- /*
- * We cannot order the 7751 as requested
- */
- err = EINVAL;
- goto errout;
- }
- }
- if (enccrd) {
- cmd->enccrd = enccrd;
+ switch (csp->csp_mode) {
+ case CSP_MODE_CIPHER:
+ case CSP_MODE_ETA:
+ if (!CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
+ cmd->base_masks |= HIFN_BASE_CMD_DECODE;
cmd->base_masks |= HIFN_BASE_CMD_CRYPT;
- switch (enccrd->crd_alg) {
+ switch (csp->csp_cipher_alg) {
case CRYPTO_ARC4:
cmd->cry_masks |= HIFN_CRYPT_CMD_ALG_RC4;
break;
@@ -2494,36 +2485,24 @@ hifn_process(device_t dev, struct cryptop *crp, int hint)
err = EINVAL;
goto errout;
}
- if (enccrd->crd_alg != CRYPTO_ARC4) {
- ivlen = ((enccrd->crd_alg == CRYPTO_AES_CBC) ?
- HIFN_AES_IV_LENGTH : HIFN_IV_LENGTH);
- if (enccrd->crd_flags & CRD_F_ENCRYPT) {
- if (enccrd->crd_flags & CRD_F_IV_EXPLICIT)
- bcopy(enccrd->crd_iv, cmd->iv, ivlen);
- else
- bcopy(ses->hs_iv, cmd->iv, ivlen);
-
- if ((enccrd->crd_flags & CRD_F_IV_PRESENT)
- == 0) {
- crypto_copyback(crp->crp_flags,
- crp->crp_buf, enccrd->crd_inject,
- ivlen, cmd->iv);
- }
- } else {
- if (enccrd->crd_flags & CRD_F_IV_EXPLICIT)
- bcopy(enccrd->crd_iv, cmd->iv, ivlen);
- else {
- crypto_copydata(crp->crp_flags,
- crp->crp_buf, enccrd->crd_inject,
- ivlen, cmd->iv);
- }
- }
+ if (csp->csp_cipher_alg != CRYPTO_ARC4) {
+ ivlen = csp->csp_ivlen;
+ if (crp->crp_flags & CRYPTO_F_IV_GENERATE) {
+ arc4rand(cmd->iv, ivlen, 0);
+ crypto_copyback(crp, crp->crp_iv_start, ivlen,
+ cmd->iv);
+ } else if (crp->crp_flags & CRYPTO_F_IV_SEPARATE)
+ memcpy(cmd->iv, crp->crp_iv, ivlen);
+ else
+ crypto_copydata(crp, crp->crp_iv_start, ivlen,
+ cmd->iv);
}
- if (enccrd->crd_flags & CRD_F_KEY_EXPLICIT)
- cmd->cry_masks |= HIFN_CRYPT_CMD_NEW_KEY;
- cmd->ck = enccrd->crd_key;
- cmd->cklen = enccrd->crd_klen >> 3;
+ if (crp->crp_cipher_key != NULL)
+ cmd->ck = crp->crp_cipher_key;
+ else
+ cmd->ck = csp->csp_cipher_key;
+ cmd->cklen = csp->csp_cipher_klen;
cmd->cry_masks |= HIFN_CRYPT_CMD_NEW_KEY;
/*
@@ -2546,13 +2525,15 @@ hifn_process(device_t dev, struct cryptop *crp, int hint)
goto errout;
}
}
+ break;
}
- if (maccrd) {
- cmd->maccrd = maccrd;
+ switch (csp->csp_mode) {
+ case CSP_MODE_DIGEST:
+ case CSP_MODE_ETA:
cmd->base_masks |= HIFN_BASE_CMD_MAC;
- switch (maccrd->crd_alg) {
+ switch (csp->csp_auth_alg) {
case CRYPTO_MD5:
cmd->mac_masks |= HIFN_MAC_CMD_ALG_MD5 |
HIFN_MAC_CMD_RESULT | HIFN_MAC_CMD_MODE_HASH |
@@ -2575,12 +2556,16 @@ hifn_process(device_t dev, struct cryptop *crp, int hint)
break;
}
- if (maccrd->crd_alg == CRYPTO_SHA1_HMAC ||
- maccrd->crd_alg == CRYPTO_MD5_HMAC) {
+ if (csp->csp_auth_alg == CRYPTO_SHA1_HMAC ||
+ csp->csp_auth_alg == CRYPTO_MD5_HMAC) {
cmd->mac_masks |= HIFN_MAC_CMD_NEW_KEY;
- bcopy(maccrd->crd_key, cmd->mac, maccrd->crd_klen >> 3);
- bzero(cmd->mac + (maccrd->crd_klen >> 3),
- HIFN_MAC_KEY_LENGTH - (maccrd->crd_klen >> 3));
+ if (crp->crp_auth_key != NULL)
+ mackey = crp->crp_auth_key;
+ else
+ mackey = csp->csp_auth_key;
+ keylen = csp->csp_auth_klen;
+ bcopy(mackey, cmd->mac, keylen);
+ bzero(cmd->mac + keylen, HIFN_MAC_KEY_LENGTH - keylen);
}
}
@@ -2655,9 +2640,8 @@ hifn_abort(struct hifn_softc *sc)
BUS_DMASYNC_POSTREAD);
}
- if (cmd->src_m != cmd->dst_m) {
- m_freem(cmd->src_m);
- crp->crp_buf = (caddr_t)cmd->dst_m;
+ if (cmd->dst_m != NULL) {
+ m_freem(cmd->dst_m);
}
/* non-shared buffers cannot be restarted */
@@ -2696,9 +2680,9 @@ hifn_callback(struct hifn_softc *sc, struct hifn_command *cmd, u_int8_t *macbuf)
{
struct hifn_dma *dma = sc->sc_dma;
struct cryptop *crp = cmd->crp;
- struct cryptodesc *crd;
+ uint8_t macbuf2[SHA1_HASH_LEN];
struct mbuf *m;
- int totlen, i, u, ivlen;
+ int totlen, i, u;
if (cmd->src_map == cmd->dst_map) {
bus_dmamap_sync(sc->sc_dmat, cmd->src_map,
@@ -2710,9 +2694,8 @@ hifn_callback(struct hifn_softc *sc, struct hifn_command *cmd, u_int8_t *macbuf)
BUS_DMASYNC_POSTREAD);
}
- if (crp->crp_flags & CRYPTO_F_IMBUF) {
- if (cmd->src_m != cmd->dst_m) {
- crp->crp_buf = (caddr_t)cmd->dst_m;
+ if (crp->crp_buf_type == CRYPTO_BUF_MBUF) {
+ if (cmd->dst_m != NULL) {
totlen = cmd->src_mapsize;
for (m = cmd->dst_m; m != NULL; m = m->m_next) {
if (totlen < m->m_len) {
@@ -2721,15 +2704,15 @@ hifn_callback(struct hifn_softc *sc, struct hifn_command *cmd, u_int8_t *macbuf)
} else
totlen -= m->m_len;
}
- cmd->dst_m->m_pkthdr.len = cmd->src_m->m_pkthdr.len;
- m_freem(cmd->src_m);
+ cmd->dst_m->m_pkthdr.len = crp->crp_mbuf->m_pkthdr.len;
+ m_freem(crp->crp_mbuf);
+ crp->crp_mbuf = cmd->dst_m;
}
}
if (cmd->sloplen != 0) {
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- cmd->src_mapsize - cmd->sloplen, cmd->sloplen,
- (caddr_t)&dma->slop[cmd->slopidx]);
+ crypto_copyback(crp, cmd->src_mapsize - cmd->sloplen,
+ cmd->sloplen, &dma->slop[cmd->slopidx]);
}
i = sc->sc_dstk; u = sc->sc_dstu;
@@ -2749,37 +2732,16 @@ hifn_callback(struct hifn_softc *sc, struct hifn_command *cmd, u_int8_t *macbuf)
hifnstats.hst_obytes += cmd->dst_mapsize;
- if ((cmd->base_masks & (HIFN_BASE_CMD_CRYPT | HIFN_BASE_CMD_DECODE)) ==
- HIFN_BASE_CMD_CRYPT) {
- for (crd = crp->crp_desc; crd; crd = crd->crd_next) {
- if (crd->crd_alg != CRYPTO_DES_CBC &&
- crd->crd_alg != CRYPTO_3DES_CBC &&
- crd->crd_alg != CRYPTO_AES_CBC)
- continue;
- ivlen = ((crd->crd_alg == CRYPTO_AES_CBC) ?
- HIFN_AES_IV_LENGTH : HIFN_IV_LENGTH);
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crd->crd_skip + crd->crd_len - ivlen, ivlen,
- cmd->session->hs_iv);
- break;
- }
- }
-
if (macbuf != NULL) {
- for (crd = crp->crp_desc; crd; crd = crd->crd_next) {
- int len;
-
- if (crd->crd_alg != CRYPTO_MD5 &&
- crd->crd_alg != CRYPTO_SHA1 &&
- crd->crd_alg != CRYPTO_MD5_HMAC &&
- crd->crd_alg != CRYPTO_SHA1_HMAC) {
- continue;
- }
- len = cmd->session->hs_mlen;
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- crd->crd_inject, len, macbuf);
- break;
- }
+ if (crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) {
+ crypto_copydata(crp, crp->crp_digest_start,
+ cmd->session->hs_mlen, macbuf2);
+ if (timingsafe_bcmp(macbuf, macbuf2,
+ cmd->session->hs_mlen) != 0)
+ crp->crp_etype = EBADMSG;
+ } else
+ crypto_copyback(crp, crp->crp_digest_start,
+ cmd->session->hs_mlen, macbuf);
}
if (cmd->src_map != cmd->dst_map) {
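The new hifn_probesession above returns CRYPTODEV_PROBE_HARDWARE, and the framework compares such negative probe values across drivers, preferring the value closest to zero (the same convention as device_probe). A standalone sketch of that selection follows; the constants are assumed to match sys/opencrypto/cryptodev.h at the time of this change, and pick_driver is a model of the framework's ranking, not its actual code:

```c
#include <assert.h>
#include <stddef.h>

/* Probe priorities; values assumed from sys/opencrypto/cryptodev.h. */
#define CRYPTODEV_PROBE_HARDWARE	(-10)	/* e.g. ccr(4), hifn(4) */
#define CRYPTODEV_PROBE_ACCEL_SOFTWARE	(-25)	/* e.g. aesni(4) */
#define CRYPTODEV_PROBE_SOFTWARE	(-50)	/* cryptosoft(4) */

/*
 * Pick the driver with the greatest (least negative) probe value,
 * mirroring how the framework ranks probesession results.  Drivers
 * that reject the session return a positive errno and are skipped.
 * Returns the index of the winner, or -1 if no driver matched.
 */
static int
pick_driver(const int *probe, size_t n)
{
	int best = 0, winner = -1;
	size_t i;

	for (i = 0; i < n; i++) {
		if (probe[i] > 0)	/* errno: driver rejected session */
			continue;
		if (winner == -1 || probe[i] > best) {
			best = probe[i];
			winner = (int)i;
		}
	}
	return (winner);
}
```

For example, with results { EINVAL, ACCEL_SOFTWARE, SOFTWARE } the accelerated software driver at index 1 wins; once a hardware driver also matches, it is preferred.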
diff --git a/sys/dev/hifn/hifn7751var.h b/sys/dev/hifn/hifn7751var.h
index e7ace8bcc977..ec0d96ec20a4 100644
--- a/sys/dev/hifn/hifn7751var.h
+++ b/sys/dev/hifn/hifn7751var.h
@@ -105,7 +105,6 @@ struct hifn_dma {
struct hifn_session {
- u_int8_t hs_iv[HIFN_MAX_IV_LENGTH];
int hs_mlen;
};
@@ -160,6 +159,7 @@ struct hifn_softc {
int sc_cmdk, sc_srck, sc_dstk, sc_resk;
int32_t sc_cid;
+ uint16_t sc_ena;
int sc_maxses;
int sc_ramsize;
int sc_flags;
@@ -257,10 +257,6 @@ struct hifn_softc {
*
*/
struct hifn_operand {
- union {
- struct mbuf *m;
- struct uio *io;
- } u;
bus_dmamap_t map;
bus_size_t mapsize;
int nsegs;
@@ -269,27 +265,24 @@ struct hifn_operand {
struct hifn_command {
struct hifn_session *session;
u_int16_t base_masks, cry_masks, mac_masks;
- u_int8_t iv[HIFN_MAX_IV_LENGTH], *ck, mac[HIFN_MAC_KEY_LENGTH];
+ u_int8_t iv[HIFN_MAX_IV_LENGTH], mac[HIFN_MAC_KEY_LENGTH];
+ const uint8_t *ck;
int cklen;
int sloplen, slopidx;
struct hifn_operand src;
struct hifn_operand dst;
+ struct mbuf *dst_m;
struct hifn_softc *softc;
struct cryptop *crp;
- struct cryptodesc *enccrd, *maccrd;
};
-#define src_m src.u.m
-#define src_io src.u.io
#define src_map src.map
#define src_mapsize src.mapsize
#define src_segs src.segs
#define src_nsegs src.nsegs
-#define dst_m dst.u.m
-#define dst_io dst.u.io
#define dst_map dst.map
#define dst_mapsize dst.mapsize
#define dst_segs dst.segs
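In the safe(4) changes below, the open-coded HMAC inner/outer digest setup is replaced by hmac_init_ipad()/hmac_init_opad() from opencrypto/xform_auth.h, which fold the key into a pad block before hashing so the driver caches only the two intermediate states instead of destructively XORing the caller's key in place. The pad construction those helpers perform can be sketched standalone (block size and pad constants per RFC 2104; this models only the XOR step, not the subsequent hash):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define HMAC_BLOCK_LEN	64	/* MD5 and SHA-1 block size */
#define HMAC_IPAD_VAL	0x36
#define HMAC_OPAD_VAL	0x5c

/*
 * Build the padded key block that the ipad/opad helpers feed to the
 * hash: key bytes XORed with the pad value, zero-extended to the block
 * length.  (Keys longer than a block would first be hashed down; that
 * case is not modeled here.)
 */
static void
hmac_pad_block(uint8_t out[HMAC_BLOCK_LEN], const uint8_t *key, size_t klen,
    uint8_t pad)
{
	size_t i;

	memset(out, pad, HMAC_BLOCK_LEN);	/* zero key bytes XOR pad */
	for (i = 0; i < klen; i++)
		out[i] = key[i] ^ pad;
}
```

With key bytes { 0x01, 0x02 } and the ipad value, the block begins 0x37, 0x34 and continues with 0x36 out to the block length.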
diff --git a/sys/dev/safe/safe.c b/sys/dev/safe/safe.c
index 7a577dfd0a8c..99f16de56c50 100644
--- a/sys/dev/safe/safe.c
+++ b/sys/dev/safe/safe.c
@@ -47,6 +47,7 @@ __FBSDID("$FreeBSD$");
#include <sys/mutex.h>
#include <sys/sysctl.h>
#include <sys/endian.h>
+#include <sys/uio.h>
#include <vm/vm.h>
#include <vm/pmap.h>
@@ -56,10 +57,8 @@ __FBSDID("$FreeBSD$");
#include <sys/bus.h>
#include <sys/rman.h>
-#include <crypto/sha1.h>
#include <opencrypto/cryptodev.h>
-#include <opencrypto/cryptosoft.h>
-#include <sys/md5.h>
+#include <opencrypto/xform_auth.h>
#include <sys/random.h>
#include <sys/kobj.h>
@@ -88,7 +87,9 @@ static int safe_suspend(device_t);
static int safe_resume(device_t);
static int safe_shutdown(device_t);
-static int safe_newsession(device_t, crypto_session_t, struct cryptoini *);
+static int safe_probesession(device_t, const struct crypto_session_params *);
+static int safe_newsession(device_t, crypto_session_t,
+ const struct crypto_session_params *);
static int safe_process(device_t, struct cryptop *, int);
static device_method_t safe_methods[] = {
@@ -101,6 +102,7 @@ static device_method_t safe_methods[] = {
DEVMETHOD(device_shutdown, safe_shutdown),
/* crypto device methods */
+ DEVMETHOD(cryptodev_probesession, safe_probesession),
DEVMETHOD(cryptodev_newsession, safe_newsession),
DEVMETHOD(cryptodev_process, safe_process),
@@ -221,7 +223,7 @@ safe_attach(device_t dev)
{
struct safe_softc *sc = device_get_softc(dev);
u_int32_t raddr;
- u_int32_t i, devinfo;
+ u_int32_t i;
int rid;
bzero(sc, sizeof (*sc));
@@ -374,12 +376,12 @@ safe_attach(device_t dev)
device_printf(sc->sc_dev, "%s", safe_partname(sc));
- devinfo = READ_REG(sc, SAFE_DEVINFO);
- if (devinfo & SAFE_DEVINFO_RNG) {
+ sc->sc_devinfo = READ_REG(sc, SAFE_DEVINFO);
+ if (sc->sc_devinfo & SAFE_DEVINFO_RNG) {
sc->sc_flags |= SAFE_FLAGS_RNG;
printf(" rng");
}
- if (devinfo & SAFE_DEVINFO_PKEY) {
+ if (sc->sc_devinfo & SAFE_DEVINFO_PKEY) {
#if 0
printf(" key");
sc->sc_flags |= SAFE_FLAGS_KEY;
@@ -387,26 +389,18 @@ safe_attach(device_t dev)
crypto_kregister(sc->sc_cid, CRK_MOD_EXP_CRT, 0);
#endif
}
- if (devinfo & SAFE_DEVINFO_DES) {
+ if (sc->sc_devinfo & SAFE_DEVINFO_DES) {
printf(" des/3des");
- crypto_register(sc->sc_cid, CRYPTO_3DES_CBC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_DES_CBC, 0, 0);
}
- if (devinfo & SAFE_DEVINFO_AES) {
+ if (sc->sc_devinfo & SAFE_DEVINFO_AES) {
printf(" aes");
- crypto_register(sc->sc_cid, CRYPTO_AES_CBC, 0, 0);
}
- if (devinfo & SAFE_DEVINFO_MD5) {
+ if (sc->sc_devinfo & SAFE_DEVINFO_MD5) {
printf(" md5");
- crypto_register(sc->sc_cid, CRYPTO_MD5_HMAC, 0, 0);
}
- if (devinfo & SAFE_DEVINFO_SHA1) {
+ if (sc->sc_devinfo & SAFE_DEVINFO_SHA1) {
printf(" sha1");
- crypto_register(sc->sc_cid, CRYPTO_SHA1_HMAC, 0, 0);
}
- printf(" null");
- crypto_register(sc->sc_cid, CRYPTO_NULL_CBC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_NULL_HMAC, 0, 0);
/* XXX other supported algorithms */
printf("\n");
@@ -629,11 +623,11 @@ safe_feed(struct safe_softc *sc, struct safe_ringentry *re)
#define N(a) (sizeof(a) / sizeof (a[0]))
static void
-safe_setup_enckey(struct safe_session *ses, caddr_t key)
+safe_setup_enckey(struct safe_session *ses, const void *key)
{
int i;
- bcopy(key, ses->ses_key, ses->ses_klen / 8);
+ bcopy(key, ses->ses_key, ses->ses_klen);
/* PE is little-endian, insure proper byte order */
for (i = 0; i < N(ses->ses_key); i++)
@@ -641,47 +635,30 @@ safe_setup_enckey(struct safe_session *ses, caddr_t key)
}
static void
-safe_setup_mackey(struct safe_session *ses, int algo, caddr_t key, int klen)
+safe_setup_mackey(struct safe_session *ses, int algo, const uint8_t *key,
+ int klen)
{
MD5_CTX md5ctx;
SHA1_CTX sha1ctx;
int i;
-
- for (i = 0; i < klen; i++)
- key[i] ^= HMAC_IPAD_VAL;
-
if (algo == CRYPTO_MD5_HMAC) {
- MD5Init(&md5ctx);
- MD5Update(&md5ctx, key, klen);
- MD5Update(&md5ctx, hmac_ipad_buffer, MD5_BLOCK_LEN - klen);
+ hmac_init_ipad(&auth_hash_hmac_md5, key, klen, &md5ctx);
bcopy(md5ctx.state, ses->ses_hminner, sizeof(md5ctx.state));
- } else {
- SHA1Init(&sha1ctx);
- SHA1Update(&sha1ctx, key, klen);
- SHA1Update(&sha1ctx, hmac_ipad_buffer,
- SHA1_BLOCK_LEN - klen);
- bcopy(sha1ctx.h.b32, ses->ses_hminner, sizeof(sha1ctx.h.b32));
- }
-
- for (i = 0; i < klen; i++)
- key[i] ^= (HMAC_IPAD_VAL ^ HMAC_OPAD_VAL);
- if (algo == CRYPTO_MD5_HMAC) {
- MD5Init(&md5ctx);
- MD5Update(&md5ctx, key, klen);
- MD5Update(&md5ctx, hmac_opad_buffer, MD5_BLOCK_LEN - klen);
+ hmac_init_opad(&auth_hash_hmac_md5, key, klen, &md5ctx);
bcopy(md5ctx.state, ses->ses_hmouter, sizeof(md5ctx.state));
+
+ explicit_bzero(&md5ctx, sizeof(md5ctx));
} else {
- SHA1Init(&sha1ctx);
- SHA1Update(&sha1ctx, key, klen);
- SHA1Update(&sha1ctx, hmac_opad_buffer,
- SHA1_BLOCK_LEN - klen);
+ hmac_init_ipad(&auth_hash_hmac_sha1, key, klen, &sha1ctx);
+ bcopy(sha1ctx.h.b32, ses->ses_hminner, sizeof(sha1ctx.h.b32));
+
+ hmac_init_opad(&auth_hash_hmac_sha1, key, klen, &sha1ctx);
bcopy(sha1ctx.h.b32, ses->ses_hmouter, sizeof(sha1ctx.h.b32));
- }
- for (i = 0; i < klen; i++)
- key[i] ^= HMAC_OPAD_VAL;
+ explicit_bzero(&sha1ctx, sizeof(sha1ctx));
+ }
/* PE is little-endian, insure proper byte order */
for (i = 0; i < N(ses->ses_hminner); i++) {
@@ -691,98 +668,147 @@ safe_setup_mackey(struct safe_session *ses, int algo, caddr_t key, int klen)
}
#undef N
-/*
- * Allocate a new 'session' and return an encoded session id. 'sidp'
- * contains our registration id, and should contain an encoded session
- * id on successful allocation.
- */
+static bool
+safe_auth_supported(struct safe_softc *sc,
+ const struct crypto_session_params *csp)
+{
+
+ switch (csp->csp_auth_alg) {
+ case CRYPTO_MD5_HMAC:
+ if ((sc->sc_devinfo & SAFE_DEVINFO_MD5) == 0)
+ return (false);
+ break;
+ case CRYPTO_SHA1_HMAC:
+ if ((sc->sc_devinfo & SAFE_DEVINFO_SHA1) == 0)
+ return (false);
+ break;
+ default:
+ return (false);
+ }
+ return (true);
+}
+
+static bool
+safe_cipher_supported(struct safe_softc *sc,
+ const struct crypto_session_params *csp)
+{
+
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_DES_CBC:
+ case CRYPTO_3DES_CBC:
+ if ((sc->sc_devinfo & SAFE_DEVINFO_DES) == 0)
+ return (false);
+ if (csp->csp_ivlen != 8)
+ return (false);
+ if (csp->csp_cipher_alg == CRYPTO_DES_CBC) {
+ if (csp->csp_cipher_klen != 8)
+ return (false);
+ } else {
+ if (csp->csp_cipher_klen != 24)
+ return (false);
+ }
+ break;
+ case CRYPTO_AES_CBC:
+ if ((sc->sc_devinfo & SAFE_DEVINFO_AES) == 0)
+ return (false);
+ if (csp->csp_ivlen != 16)
+ return (false);
+ if (csp->csp_cipher_klen != 16 &&
+ csp->csp_cipher_klen != 24 &&
+ csp->csp_cipher_klen != 32)
+ return (false);
+ break;
+ default:
+ return (false);
+ }
+ return (true);
+}
+
static int
-safe_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
+safe_probesession(device_t dev, const struct crypto_session_params *csp)
{
struct safe_softc *sc = device_get_softc(dev);
- struct cryptoini *c, *encini = NULL, *macini = NULL;
- struct safe_session *ses = NULL;
- if (cri == NULL || sc == NULL)
+ if (csp->csp_flags != 0)
return (EINVAL);
-
- for (c = cri; c != NULL; c = c->cri_next) {
- if (c->cri_alg == CRYPTO_MD5_HMAC ||
- c->cri_alg == CRYPTO_SHA1_HMAC ||
- c->cri_alg == CRYPTO_NULL_HMAC) {
- if (macini)
- return (EINVAL);
- macini = c;
- } else if (c->cri_alg == CRYPTO_DES_CBC ||
- c->cri_alg == CRYPTO_3DES_CBC ||
- c->cri_alg == CRYPTO_AES_CBC ||
- c->cri_alg == CRYPTO_NULL_CBC) {
- if (encini)
- return (EINVAL);
- encini = c;
- } else
+ switch (csp->csp_mode) {
+ case CSP_MODE_DIGEST:
+ if (!safe_auth_supported(sc, csp))
return (EINVAL);
- }
- if (encini == NULL && macini == NULL)
+ break;
+ case CSP_MODE_CIPHER:
+ if (!safe_cipher_supported(sc, csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_ETA:
+ if (!safe_auth_supported(sc, csp) ||
+ !safe_cipher_supported(sc, csp))
+ return (EINVAL);
+ break;
+ default:
return (EINVAL);
- if (encini) { /* validate key length */
- switch (encini->cri_alg) {
- case CRYPTO_DES_CBC:
- if (encini->cri_klen != 64)
- return (EINVAL);
- break;
- case CRYPTO_3DES_CBC:
- if (encini->cri_klen != 192)
- return (EINVAL);
- break;
- case CRYPTO_AES_CBC:
- if (encini->cri_klen != 128 &&
- encini->cri_klen != 192 &&
- encini->cri_klen != 256)
- return (EINVAL);
- break;
- }
}
+ return (CRYPTODEV_PROBE_HARDWARE);
+}
+
+/*
+ * Allocate a new 'session'.
+ */
+static int
+safe_newsession(device_t dev, crypto_session_t cses,
+ const struct crypto_session_params *csp)
+{
+ struct safe_session *ses;
+
ses = crypto_get_driver_session(cses);
- if (encini) {
- /* get an IV */
- /* XXX may read fewer than requested */
- read_random(ses->ses_iv, sizeof(ses->ses_iv));
-
- ses->ses_klen = encini->cri_klen;
- if (encini->cri_key != NULL)
- safe_setup_enckey(ses, encini->cri_key);
+ if (csp->csp_cipher_alg != 0) {
+ ses->ses_klen = csp->csp_cipher_klen;
+ if (csp->csp_cipher_key != NULL)
+ safe_setup_enckey(ses, csp->csp_cipher_key);
}
- if (macini) {
- ses->ses_mlen = macini->cri_mlen;
+ if (csp->csp_auth_alg != 0) {
+ ses->ses_mlen = csp->csp_auth_mlen;
if (ses->ses_mlen == 0) {
- if (macini->cri_alg == CRYPTO_MD5_HMAC)
+ if (csp->csp_auth_alg == CRYPTO_MD5_HMAC)
ses->ses_mlen = MD5_HASH_LEN;
else
ses->ses_mlen = SHA1_HASH_LEN;
}
- if (macini->cri_key != NULL) {
- safe_setup_mackey(ses, macini->cri_alg, macini->cri_key,
- macini->cri_klen / 8);
+ if (csp->csp_auth_key != NULL) {
+ safe_setup_mackey(ses, csp->csp_auth_alg,
+ csp->csp_auth_key, csp->csp_auth_klen);
}
}
return (0);
}
+static bus_size_t
+safe_crp_length(struct cryptop *crp)
+{
+
+ switch (crp->crp_buf_type) {
+ case CRYPTO_BUF_MBUF:
+ return (crp->crp_mbuf->m_pkthdr.len);
+ case CRYPTO_BUF_UIO:
+ return (crp->crp_uio->uio_resid);
+ case CRYPTO_BUF_CONTIG:
+ return (crp->crp_ilen);
+ default:
+ panic("bad crp buffer type");
+ }
+}
+
static void
-safe_op_cb(void *arg, bus_dma_segment_t *seg, int nsegs, bus_size_t mapsize, int error)
+safe_op_cb(void *arg, bus_dma_segment_t *seg, int nsegs, int error)
{
struct safe_operand *op = arg;
- DPRINTF(("%s: mapsize %u nsegs %d error %d\n", __func__,
- (u_int) mapsize, nsegs, error));
+ DPRINTF(("%s: nsegs %d error %d\n", __func__,
+ nsegs, error));
if (error != 0)
return;
- op->mapsize = mapsize;
op->nsegs = nsegs;
bcopy(seg, op->segs, nsegs * sizeof (seg[0]));
}
@@ -790,11 +816,10 @@ safe_op_cb(void *arg, bus_dma_segment_t *seg, int nsegs, bus_size_t mapsize, int
static int
safe_process(device_t dev, struct cryptop *crp, int hint)
{
- struct safe_softc *sc = device_get_softc(dev);
+ struct safe_softc *sc = device_get_softc(dev);
+ const struct crypto_session_params *csp;
int err = 0, i, nicealign, uniform;
- struct cryptodesc *crd1, *crd2, *maccrd, *enccrd;
- int bypass, oplen, ivsize;
- caddr_t iv;
+ int bypass, oplen;
int16_t coffset;
struct safe_session *ses;
struct safe_ringentry *re;
@@ -802,11 +827,6 @@ safe_process(device_t dev, struct cryptop *crp, int hint)
struct safe_pdesc *pd;
u_int32_t cmd0, cmd1, staterec;
- if (crp == NULL || crp->crp_callback == NULL || sc == NULL) {
- safestats.st_invalid++;
- return (EINVAL);
- }
-
mtx_lock(&sc->sc_ringmtx);
if (sc->sc_front == sc->sc_back && sc->sc_nqchip != 0) {
safestats.st_ringfull++;
@@ -823,104 +843,46 @@ safe_process(device_t dev, struct cryptop *crp, int hint)
re->re_crp = crp;
- if (crp->crp_flags & CRYPTO_F_IMBUF) {
- re->re_src_m = (struct mbuf *)crp->crp_buf;
- re->re_dst_m = (struct mbuf *)crp->crp_buf;
- } else if (crp->crp_flags & CRYPTO_F_IOV) {
- re->re_src_io = (struct uio *)crp->crp_buf;
- re->re_dst_io = (struct uio *)crp->crp_buf;
- } else {
- safestats.st_badflags++;
- err = EINVAL;
- goto errout; /* XXX we don't handle contiguous blocks! */
- }
-
sa = &re->re_sa;
ses = crypto_get_driver_session(crp->crp_session);
-
- crd1 = crp->crp_desc;
- if (crd1 == NULL) {
- safestats.st_nodesc++;
- err = EINVAL;
- goto errout;
- }
- crd2 = crd1->crd_next;
+ csp = crypto_get_params(crp->crp_session);
cmd0 = SAFE_SA_CMD0_BASIC; /* basic group operation */
cmd1 = 0;
- if (crd2 == NULL) {
- if (crd1->crd_alg == CRYPTO_MD5_HMAC ||
- crd1->crd_alg == CRYPTO_SHA1_HMAC ||
- crd1->crd_alg == CRYPTO_NULL_HMAC) {
- maccrd = crd1;
- enccrd = NULL;
- cmd0 |= SAFE_SA_CMD0_OP_HASH;
- } else if (crd1->crd_alg == CRYPTO_DES_CBC ||
- crd1->crd_alg == CRYPTO_3DES_CBC ||
- crd1->crd_alg == CRYPTO_AES_CBC ||
- crd1->crd_alg == CRYPTO_NULL_CBC) {
- maccrd = NULL;
- enccrd = crd1;
- cmd0 |= SAFE_SA_CMD0_OP_CRYPT;
- } else {
- safestats.st_badalg++;
- err = EINVAL;
- goto errout;
- }
- } else {
- if ((crd1->crd_alg == CRYPTO_MD5_HMAC ||
- crd1->crd_alg == CRYPTO_SHA1_HMAC ||
- crd1->crd_alg == CRYPTO_NULL_HMAC) &&
- (crd2->crd_alg == CRYPTO_DES_CBC ||
- crd2->crd_alg == CRYPTO_3DES_CBC ||
- crd2->crd_alg == CRYPTO_AES_CBC ||
- crd2->crd_alg == CRYPTO_NULL_CBC) &&
- ((crd2->crd_flags & CRD_F_ENCRYPT) == 0)) {
- maccrd = crd1;
- enccrd = crd2;
- } else if ((crd1->crd_alg == CRYPTO_DES_CBC ||
- crd1->crd_alg == CRYPTO_3DES_CBC ||
- crd1->crd_alg == CRYPTO_AES_CBC ||
- crd1->crd_alg == CRYPTO_NULL_CBC) &&
- (crd2->crd_alg == CRYPTO_MD5_HMAC ||
- crd2->crd_alg == CRYPTO_SHA1_HMAC ||
- crd2->crd_alg == CRYPTO_NULL_HMAC) &&
- (crd1->crd_flags & CRD_F_ENCRYPT)) {
- enccrd = crd1;
- maccrd = crd2;
- } else {
- safestats.st_badalg++;
- err = EINVAL;
- goto errout;
- }
+ switch (csp->csp_mode) {
+ case CSP_MODE_DIGEST:
+ cmd0 |= SAFE_SA_CMD0_OP_HASH;
+ break;
+ case CSP_MODE_CIPHER:
+ cmd0 |= SAFE_SA_CMD0_OP_CRYPT;
+ break;
+ case CSP_MODE_ETA:
cmd0 |= SAFE_SA_CMD0_OP_BOTH;
+ break;
}
- if (enccrd) {
- if (enccrd->crd_flags & CRD_F_KEY_EXPLICIT)
- safe_setup_enckey(ses, enccrd->crd_key);
+ if (csp->csp_cipher_alg != 0) {
+ if (crp->crp_cipher_key != NULL)
+ safe_setup_enckey(ses, crp->crp_cipher_key);
- if (enccrd->crd_alg == CRYPTO_DES_CBC) {
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_DES_CBC:
cmd0 |= SAFE_SA_CMD0_DES;
cmd1 |= SAFE_SA_CMD1_CBC;
- ivsize = 2*sizeof(u_int32_t);
- } else if (enccrd->crd_alg == CRYPTO_3DES_CBC) {
+ break;
+ case CRYPTO_3DES_CBC:
cmd0 |= SAFE_SA_CMD0_3DES;
cmd1 |= SAFE_SA_CMD1_CBC;
- ivsize = 2*sizeof(u_int32_t);
- } else if (enccrd->crd_alg == CRYPTO_AES_CBC) {
+ break;
+ case CRYPTO_AES_CBC:
cmd0 |= SAFE_SA_CMD0_AES;
cmd1 |= SAFE_SA_CMD1_CBC;
- if (ses->ses_klen == 128)
+ if (ses->ses_klen * 8 == 128)
cmd1 |= SAFE_SA_CMD1_AES128;
- else if (ses->ses_klen == 192)
+ else if (ses->ses_klen * 8 == 192)
cmd1 |= SAFE_SA_CMD1_AES192;
else
cmd1 |= SAFE_SA_CMD1_AES256;
- ivsize = 4*sizeof(u_int32_t);
- } else {
- cmd0 |= SAFE_SA_CMD0_CRYPT_NULL;
- ivsize = 0;
}
/*
@@ -932,32 +894,28 @@ safe_process(device_t dev, struct cryptop *crp, int hint)
* in the state record and set the hash/crypt offset to
* copy both the header+IV.
*/
- if (enccrd->crd_flags & CRD_F_ENCRYPT) {
+ if (crp->crp_flags & CRYPTO_F_IV_GENERATE) {
+ arc4rand(re->re_sastate.sa_saved_iv, csp->csp_ivlen, 0);
+ crypto_copyback(crp, crp->crp_iv_start, csp->csp_ivlen,
+ re->re_sastate.sa_saved_iv);
+ } else if (crp->crp_flags & CRYPTO_F_IV_SEPARATE)
+ memcpy(re->re_sastate.sa_saved_iv, crp->crp_iv,
+ csp->csp_ivlen);
+ else
+ crypto_copydata(crp, crp->crp_iv_start, csp->csp_ivlen,
+ re->re_sastate.sa_saved_iv);
+ cmd0 |= SAFE_SA_CMD0_IVLD_STATE;
+
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) {
cmd0 |= SAFE_SA_CMD0_OUTBOUND;
- if (enccrd->crd_flags & CRD_F_IV_EXPLICIT)
- iv = enccrd->crd_iv;
- else
- iv = (caddr_t) ses->ses_iv;
- if ((enccrd->crd_flags & CRD_F_IV_PRESENT) == 0) {
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- enccrd->crd_inject, ivsize, iv);
- }
- bcopy(iv, re->re_sastate.sa_saved_iv, ivsize);
- cmd0 |= SAFE_SA_CMD0_IVLD_STATE | SAFE_SA_CMD0_SAVEIV;
- re->re_flags |= SAFE_QFLAGS_COPYOUTIV;
+ /*
+ * XXX: I suspect we don't need this since we
+ * don't save the returned IV.
+ */
+ cmd0 |= SAFE_SA_CMD0_SAVEIV;
} else {
cmd0 |= SAFE_SA_CMD0_INBOUND;
-
- if (enccrd->crd_flags & CRD_F_IV_EXPLICIT) {
- bcopy(enccrd->crd_iv,
- re->re_sastate.sa_saved_iv, ivsize);
- } else {
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- enccrd->crd_inject, ivsize,
- (caddr_t)re->re_sastate.sa_saved_iv);
- }
- cmd0 |= SAFE_SA_CMD0_IVLD_STATE;
}
/*
* For basic encryption use the zero pad algorithm.
@@ -973,21 +931,23 @@ safe_process(device_t dev, struct cryptop *crp, int hint)
bcopy(ses->ses_key, sa->sa_key, sizeof(sa->sa_key));
}
- if (maccrd) {
- if (maccrd->crd_flags & CRD_F_KEY_EXPLICIT) {
- safe_setup_mackey(ses, maccrd->crd_alg,
- maccrd->crd_key, maccrd->crd_klen / 8);
+ if (csp->csp_auth_alg != 0) {
+ if (crp->crp_auth_key != NULL) {
+ safe_setup_mackey(ses, csp->csp_auth_alg,
+ crp->crp_auth_key, csp->csp_auth_klen);
}
- if (maccrd->crd_alg == CRYPTO_MD5_HMAC) {
+ switch (csp->csp_auth_alg) {
+ case CRYPTO_MD5_HMAC:
cmd0 |= SAFE_SA_CMD0_MD5;
cmd1 |= SAFE_SA_CMD1_HMAC; /* NB: enable HMAC */
- } else if (maccrd->crd_alg == CRYPTO_SHA1_HMAC) {
+ break;
+ case CRYPTO_SHA1_HMAC:
cmd0 |= SAFE_SA_CMD0_SHA1;
cmd1 |= SAFE_SA_CMD1_HMAC; /* NB: enable HMAC */
- } else {
- cmd0 |= SAFE_SA_CMD0_HASH_NULL;
+ break;
}
+
/*
* Digest data is loaded from the SA and the hash
* result is saved to the state block where we
@@ -1003,38 +963,32 @@ safe_process(device_t dev, struct cryptop *crp, int hint)
re->re_flags |= SAFE_QFLAGS_COPYOUTICV;
}
- if (enccrd && maccrd) {
+ if (csp->csp_mode == CSP_MODE_ETA) {
/*
- * The offset from hash data to the start of
- * crypt data is the difference in the skips.
+ * The driver only supports ETA requests where there
+ * is no gap between the AAD and payload.
*/
- bypass = maccrd->crd_skip;
- coffset = enccrd->crd_skip - maccrd->crd_skip;
- if (coffset < 0) {
- DPRINTF(("%s: hash does not precede crypt; "
- "mac skip %u enc skip %u\n",
- __func__, maccrd->crd_skip, enccrd->crd_skip));
- safestats.st_skipmismatch++;
- err = EINVAL;
- goto errout;
- }
- oplen = enccrd->crd_skip + enccrd->crd_len;
- if (maccrd->crd_skip + maccrd->crd_len != oplen) {
- DPRINTF(("%s: hash amount %u != crypt amount %u\n",
- __func__, maccrd->crd_skip + maccrd->crd_len,
- oplen));
+ if (crp->crp_aad_length != 0 &&
+ crp->crp_aad_start + crp->crp_aad_length !=
+ crp->crp_payload_start) {
safestats.st_lenmismatch++;
err = EINVAL;
goto errout;
}
+ if (crp->crp_aad_length != 0)
+ bypass = crp->crp_aad_start;
+ else
+ bypass = crp->crp_payload_start;
+ coffset = crp->crp_aad_length;
+ oplen = crp->crp_payload_start + crp->crp_payload_length;
#ifdef SAFE_DEBUG
if (safe_debug) {
- printf("mac: skip %d, len %d, inject %d\n",
- maccrd->crd_skip, maccrd->crd_len,
- maccrd->crd_inject);
- printf("enc: skip %d, len %d, inject %d\n",
- enccrd->crd_skip, enccrd->crd_len,
- enccrd->crd_inject);
+ printf("AAD: skip %d, len %d, digest %d\n",
+ crp->crp_aad_start, crp->crp_aad_length,
+ crp->crp_digest_start);
+ printf("payload: skip %d, len %d, IV %d\n",
+ crp->crp_payload_start, crp->crp_payload_length,
+ crp->crp_iv_start);
printf("bypass %d coffset %d oplen %d\n",
bypass, coffset, oplen);
}
@@ -1070,13 +1024,8 @@ safe_process(device_t dev, struct cryptop *crp, int hint)
*/
cmd1 |= SAFE_SA_CMD1_MUTABLE;
} else {
- if (enccrd) {
- bypass = enccrd->crd_skip;
- oplen = bypass + enccrd->crd_len;
- } else {
- bypass = maccrd->crd_skip;
- oplen = bypass + maccrd->crd_len;
- }
+ bypass = crp->crp_payload_start;
+ oplen = bypass + crp->crp_payload_length;
coffset = 0;
}
/* XXX verify multiple of 4 when using s/g */
@@ -1092,27 +1041,15 @@ safe_process(device_t dev, struct cryptop *crp, int hint)
err = ENOMEM;
goto errout;
}
- if (crp->crp_flags & CRYPTO_F_IMBUF) {
- if (bus_dmamap_load_mbuf(sc->sc_srcdmat, re->re_src_map,
- re->re_src_m, safe_op_cb,
- &re->re_src, BUS_DMA_NOWAIT) != 0) {
- bus_dmamap_destroy(sc->sc_srcdmat, re->re_src_map);
- re->re_src_map = NULL;
- safestats.st_noload++;
- err = ENOMEM;
- goto errout;
- }
- } else if (crp->crp_flags & CRYPTO_F_IOV) {
- if (bus_dmamap_load_uio(sc->sc_srcdmat, re->re_src_map,
- re->re_src_io, safe_op_cb,
- &re->re_src, BUS_DMA_NOWAIT) != 0) {
- bus_dmamap_destroy(sc->sc_srcdmat, re->re_src_map);
- re->re_src_map = NULL;
- safestats.st_noload++;
- err = ENOMEM;
- goto errout;
- }
+ if (bus_dmamap_load_crp(sc->sc_srcdmat, re->re_src_map, crp, safe_op_cb,
+ &re->re_src, BUS_DMA_NOWAIT) != 0) {
+ bus_dmamap_destroy(sc->sc_srcdmat, re->re_src_map);
+ re->re_src_map = NULL;
+ safestats.st_noload++;
+ err = ENOMEM;
+ goto errout;
}
+ re->re_src_mapsize = safe_crp_length(crp);
nicealign = safe_dmamap_aligned(&re->re_src);
uniform = safe_dmamap_uniform(&re->re_src);
@@ -1143,211 +1080,175 @@ safe_process(device_t dev, struct cryptop *crp, int hint)
re->re_desc.d_src = re->re_src_segs[0].ds_addr;
}
- if (enccrd == NULL && maccrd != NULL) {
+ if (csp->csp_mode == CSP_MODE_DIGEST) {
/*
* Hash op; no destination needed.
*/
} else {
- if (crp->crp_flags & CRYPTO_F_IOV) {
- if (!nicealign) {
- safestats.st_iovmisaligned++;
- err = EINVAL;
+ if (nicealign && uniform == 1) {
+ /*
+ * Source layout is suitable for direct
+ * sharing of the DMA map and segment list.
+ */
+ re->re_dst = re->re_src;
+ } else if (nicealign && uniform == 2) {
+ /*
+ * The source is properly aligned but requires a
+ * different particle list to handle DMA of the
+ * result. Create a new map and do the load to
+ * create the segment list. The particle
+ * descriptor setup code below will handle the
+ * rest.
+ */
+ if (bus_dmamap_create(sc->sc_dstdmat, BUS_DMA_NOWAIT,
+ &re->re_dst_map)) {
+ safestats.st_nomap++;
+ err = ENOMEM;
goto errout;
}
- if (uniform != 1) {
- /*
- * Source is not suitable for direct use as
- * the destination. Create a new scatter/gather
- * list based on the destination requirements
- * and check if that's ok.
- */
- if (bus_dmamap_create(sc->sc_dstdmat,
- BUS_DMA_NOWAIT, &re->re_dst_map)) {
- safestats.st_nomap++;
- err = ENOMEM;
- goto errout;
- }
- if (bus_dmamap_load_uio(sc->sc_dstdmat,
- re->re_dst_map, re->re_dst_io,
- safe_op_cb, &re->re_dst,
- BUS_DMA_NOWAIT) != 0) {
- bus_dmamap_destroy(sc->sc_dstdmat,
- re->re_dst_map);
- re->re_dst_map = NULL;
- safestats.st_noload++;
- err = ENOMEM;
- goto errout;
- }
- uniform = safe_dmamap_uniform(&re->re_dst);
- if (!uniform) {
- /*
- * There's no way to handle the DMA
- * requirements with this uio. We
- * could create a separate DMA area for
- * the result and then copy it back,
- * but for now we just bail and return
- * an error. Note that uio requests
- * > SAFE_MAX_DSIZE are handled because
- * the DMA map and segment list for the
- * destination wil result in a
- * destination particle list that does
- * the necessary scatter DMA.
- */
- safestats.st_iovnotuniform++;
- err = EINVAL;
- goto errout;
- }
- } else
- re->re_dst = re->re_src;
- } else if (crp->crp_flags & CRYPTO_F_IMBUF) {
- if (nicealign && uniform == 1) {
- /*
- * Source layout is suitable for direct
- * sharing of the DMA map and segment list.
- */
- re->re_dst = re->re_src;
- } else if (nicealign && uniform == 2) {
- /*
- * The source is properly aligned but requires a
- * different particle list to handle DMA of the
- * result. Create a new map and do the load to
- * create the segment list. The particle
- * descriptor setup code below will handle the
- * rest.
- */
- if (bus_dmamap_create(sc->sc_dstdmat,
- BUS_DMA_NOWAIT, &re->re_dst_map)) {
- safestats.st_nomap++;
- err = ENOMEM;
- goto errout;
+ if (bus_dmamap_load_crp(sc->sc_dstdmat, re->re_dst_map,
+ crp, safe_op_cb, &re->re_dst, BUS_DMA_NOWAIT) !=
+ 0) {
+ bus_dmamap_destroy(sc->sc_dstdmat,
+ re->re_dst_map);
+ re->re_dst_map = NULL;
+ safestats.st_noload++;
+ err = ENOMEM;
+ goto errout;
+ }
+ } else if (crp->crp_buf_type == CRYPTO_BUF_MBUF) {
+ int totlen, len;
+ struct mbuf *m, *top, **mp;
+
+ /*
+ * DMA constraints require that we allocate a
+ * new mbuf chain for the destination. We
+ * allocate an entire new set of mbufs of
+ * optimal/required size and then tell the
+ * hardware to copy any bits that are not
+ * created as a byproduct of the operation.
+ */
+ if (!nicealign)
+ safestats.st_unaligned++;
+ if (!uniform)
+ safestats.st_notuniform++;
+ totlen = re->re_src_mapsize;
+ if (crp->crp_mbuf->m_flags & M_PKTHDR) {
+ len = MHLEN;
+ MGETHDR(m, M_NOWAIT, MT_DATA);
+ if (m && !m_dup_pkthdr(m, crp->crp_mbuf,
+ M_NOWAIT)) {
+ m_free(m);
+ m = NULL;
}
- if (bus_dmamap_load_mbuf(sc->sc_dstdmat,
- re->re_dst_map, re->re_dst_m,
- safe_op_cb, &re->re_dst,
- BUS_DMA_NOWAIT) != 0) {
- bus_dmamap_destroy(sc->sc_dstdmat,
- re->re_dst_map);
- re->re_dst_map = NULL;
- safestats.st_noload++;
- err = ENOMEM;
+ } else {
+ len = MLEN;
+ MGET(m, M_NOWAIT, MT_DATA);
+ }
+ if (m == NULL) {
+ safestats.st_nombuf++;
+ err = sc->sc_nqchip ? ERESTART : ENOMEM;
+ goto errout;
+ }
+ if (totlen >= MINCLSIZE) {
+ if (!(MCLGET(m, M_NOWAIT))) {
+ m_free(m);
+ safestats.st_nomcl++;
+ err = sc->sc_nqchip ?
+ ERESTART : ENOMEM;
goto errout;
}
- } else { /* !(aligned and/or uniform) */
- int totlen, len;
- struct mbuf *m, *top, **mp;
+ len = MCLBYTES;
+ }
+ m->m_len = len;
+ top = NULL;
+ mp = &top;
- /*
- * DMA constraints require that we allocate a
- * new mbuf chain for the destination. We
- * allocate an entire new set of mbufs of
- * optimal/required size and then tell the
- * hardware to copy any bits that are not
- * created as a byproduct of the operation.
- */
- if (!nicealign)
- safestats.st_unaligned++;
- if (!uniform)
- safestats.st_notuniform++;
- totlen = re->re_src_mapsize;
- if (re->re_src_m->m_flags & M_PKTHDR) {
- len = MHLEN;
- MGETHDR(m, M_NOWAIT, MT_DATA);
- if (m && !m_dup_pkthdr(m, re->re_src_m,
- M_NOWAIT)) {
- m_free(m);
- m = NULL;
+ while (totlen > 0) {
+ if (top) {
+ MGET(m, M_NOWAIT, MT_DATA);
+ if (m == NULL) {
+ m_freem(top);
+ safestats.st_nombuf++;
+ err = sc->sc_nqchip ?
+ ERESTART : ENOMEM;
+ goto errout;
}
- } else {
len = MLEN;
- MGET(m, M_NOWAIT, MT_DATA);
- }
- if (m == NULL) {
- safestats.st_nombuf++;
- err = sc->sc_nqchip ? ERESTART : ENOMEM;
- goto errout;
}
- if (totlen >= MINCLSIZE) {
+ if (top && totlen >= MINCLSIZE) {
if (!(MCLGET(m, M_NOWAIT))) {
- m_free(m);
+ *mp = m;
+ m_freem(top);
safestats.st_nomcl++;
err = sc->sc_nqchip ?
- ERESTART : ENOMEM;
+ ERESTART : ENOMEM;
goto errout;
}
len = MCLBYTES;
}
- m->m_len = len;
- top = NULL;
- mp = &top;
-
- while (totlen > 0) {
- if (top) {
- MGET(m, M_NOWAIT, MT_DATA);
- if (m == NULL) {
- m_freem(top);
- safestats.st_nombuf++;
- err = sc->sc_nqchip ?
- ERESTART : ENOMEM;
- goto errout;
- }
- len = MLEN;
- }
- if (top && totlen >= MINCLSIZE) {
- if (!(MCLGET(m, M_NOWAIT))) {
- *mp = m;
- m_freem(top);
- safestats.st_nomcl++;
- err = sc->sc_nqchip ?
- ERESTART : ENOMEM;
- goto errout;
- }
- len = MCLBYTES;
- }
- m->m_len = len = min(totlen, len);
- totlen -= len;
- *mp = m;
- mp = &m->m_next;
- }
- re->re_dst_m = top;
- if (bus_dmamap_create(sc->sc_dstdmat,
- BUS_DMA_NOWAIT, &re->re_dst_map) != 0) {
- safestats.st_nomap++;
- err = ENOMEM;
- goto errout;
- }
- if (bus_dmamap_load_mbuf(sc->sc_dstdmat,
- re->re_dst_map, re->re_dst_m,
- safe_op_cb, &re->re_dst,
- BUS_DMA_NOWAIT) != 0) {
- bus_dmamap_destroy(sc->sc_dstdmat,
- re->re_dst_map);
- re->re_dst_map = NULL;
- safestats.st_noload++;
- err = ENOMEM;
- goto errout;
- }
- if (re->re_src.mapsize > oplen) {
- /*
- * There's data following what the
- * hardware will copy for us. If this
- * isn't just the ICV (that's going to
- * be written on completion), copy it
- * to the new mbufs
- */
- if (!(maccrd &&
- (re->re_src.mapsize-oplen) == 12 &&
- maccrd->crd_inject == oplen))
- safe_mcopy(re->re_src_m,
- re->re_dst_m,
- oplen);
- else
- safestats.st_noicvcopy++;
- }
+ m->m_len = len = min(totlen, len);
+ totlen -= len;
+ *mp = m;
+ mp = &m->m_next;
+ }
+ re->re_dst_m = top;
+ if (bus_dmamap_create(sc->sc_dstdmat,
+ BUS_DMA_NOWAIT, &re->re_dst_map) != 0) {
+ safestats.st_nomap++;
+ err = ENOMEM;
+ goto errout;
+ }
+ if (bus_dmamap_load_mbuf_sg(sc->sc_dstdmat,
+ re->re_dst_map, top, re->re_dst_segs,
+ &re->re_dst_nsegs, 0) != 0) {
+ bus_dmamap_destroy(sc->sc_dstdmat,
+ re->re_dst_map);
+ re->re_dst_map = NULL;
+ safestats.st_noload++;
+ err = ENOMEM;
+ goto errout;
+ }
+ re->re_dst_mapsize = re->re_src_mapsize;
+ if (re->re_src.mapsize > oplen) {
+ /*
+ * There's data following what the
+ * hardware will copy for us. If this
+ * isn't just the ICV (that's going to
+ * be written on completion), copy it
+ * to the new mbufs
+ */
+ if (!(csp->csp_mode == CSP_MODE_ETA &&
+ (re->re_src.mapsize-oplen) == ses->ses_mlen &&
+ crp->crp_digest_start == oplen))
+ safe_mcopy(crp->crp_mbuf, re->re_dst_m,
+ oplen);
+ else
+ safestats.st_noicvcopy++;
}
} else {
- safestats.st_badflags++;
- err = EINVAL;
- goto errout;
+ if (!nicealign) {
+ safestats.st_iovmisaligned++;
+ err = EINVAL;
+ goto errout;
+ } else {
+ /*
+ * There's no way to handle the DMA
+ * requirements with this uio. We
+ * could create a separate DMA area for
+ * the result and then copy it back,
+ * but for now we just bail and return
+ * an error. Note that uio requests
+ * > SAFE_MAX_DSIZE are handled because
+ * the DMA map and segment list for the
+ * destination will result in a
+ * destination particle list that does
+ * the necessary scatter DMA.
+ */
+ safestats.st_iovnotuniform++;
+ err = EINVAL;
+ goto errout;
+ }
}
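The replacement-chain logic above allocates a fresh mbuf chain for the destination when the source layout is unusable: an MHLEN/MLEN mbuf first, upgraded to an MCLBYTES cluster whenever the remaining length reaches MINCLSIZE. A userspace sketch of the segment-count math (constants and names are illustrative, not the real `<sys/mbuf.h>` values):

```c
#include <stddef.h>

/* Illustrative stand-ins for MHLEN/MLEN/MINCLSIZE/MCLBYTES. */
#define SK_MHLEN     100
#define SK_MLEN      200
#define SK_MINCLSIZE 209
#define SK_MCLBYTES  2048

/*
 * Return how many buffers a chain of 'totlen' bytes needs, following
 * the allocate-then-upgrade-to-cluster pattern in the hunk above: each
 * buffer starts at the plain mbuf size and is promoted to a cluster
 * when the bytes still to place reach the cluster threshold.
 */
static int
count_segments(size_t totlen, int pkthdr)
{
	size_t len = pkthdr ? SK_MHLEN : SK_MLEN;
	int n = 0;

	if (totlen >= SK_MINCLSIZE)
		len = SK_MCLBYTES;
	while (totlen > 0) {
		size_t seg = totlen < len ? totlen : len;

		totlen -= seg;
		n++;
		/* Subsequent buffers are non-header mbufs. */
		len = SK_MLEN;
		if (totlen >= SK_MINCLSIZE)
			len = SK_MCLBYTES;
	}
	return (n);
}
```

This mirrors why the driver can bound the chain length up front: the per-buffer capacity is fixed once the cluster decision is made for each step.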
if (re->re_dst.nsegs > 1) {
@@ -1393,7 +1294,7 @@ safe_process(device_t dev, struct cryptop *crp, int hint)
* ready for processing.
*/
re->re_desc.d_csr = SAFE_PE_CSR_READY | SAFE_PE_CSR_SAPCI;
- if (maccrd)
+ if (csp->csp_auth_alg != 0)
re->re_desc.d_csr |= SAFE_PE_CSR_LOADSA | SAFE_PE_CSR_HASHFINAL;
re->re_desc.d_len = oplen
| SAFE_PE_LEN_READY
@@ -1412,7 +1313,7 @@ safe_process(device_t dev, struct cryptop *crp, int hint)
return (0);
errout:
- if ((re->re_dst_m != NULL) && (re->re_src_m != re->re_dst_m))
+ if (re->re_dst_m != NULL)
m_freem(re->re_dst_m);
if (re->re_dst_map != NULL && re->re_dst_map != re->re_src_map) {
@@ -1436,11 +1337,13 @@ errout:
static void
safe_callback(struct safe_softc *sc, struct safe_ringentry *re)
{
+ const struct crypto_session_params *csp;
struct cryptop *crp = (struct cryptop *)re->re_crp;
struct safe_session *ses;
- struct cryptodesc *crd;
+ uint8_t hash[HASH_MAX_LEN];
ses = crypto_get_driver_session(crp->crp_session);
+ csp = crypto_get_params(crp->crp_session);
safestats.st_opackets++;
safestats.st_obytes += re->re_dst.mapsize;
@@ -1454,6 +1357,9 @@ safe_callback(struct safe_softc *sc, struct safe_ringentry *re)
safestats.st_peoperr++;
crp->crp_etype = EIO; /* something more meaningful? */
}
+
+ /* XXX: Should crp_mbuf be updated to re->re_dst_m if it is non-NULL? */
+
if (re->re_dst_map != NULL && re->re_dst_map != re->re_src_map) {
bus_dmamap_sync(sc->sc_dstdmat, re->re_dst_map,
BUS_DMASYNC_POSTREAD);
@@ -1464,58 +1370,29 @@ safe_callback(struct safe_softc *sc, struct safe_ringentry *re)
bus_dmamap_unload(sc->sc_srcdmat, re->re_src_map);
bus_dmamap_destroy(sc->sc_srcdmat, re->re_src_map);
- /*
- * If result was written to a differet mbuf chain, swap
- * it in as the return value and reclaim the original.
- */
- if ((crp->crp_flags & CRYPTO_F_IMBUF) && re->re_src_m != re->re_dst_m) {
- m_freem(re->re_src_m);
- crp->crp_buf = (caddr_t)re->re_dst_m;
- }
-
- if (re->re_flags & SAFE_QFLAGS_COPYOUTIV) {
- /* copy out IV for future use */
- for (crd = crp->crp_desc; crd; crd = crd->crd_next) {
- int ivsize;
-
- if (crd->crd_alg == CRYPTO_DES_CBC ||
- crd->crd_alg == CRYPTO_3DES_CBC) {
- ivsize = 2*sizeof(u_int32_t);
- } else if (crd->crd_alg == CRYPTO_AES_CBC) {
- ivsize = 4*sizeof(u_int32_t);
- } else
- continue;
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crd->crd_skip + crd->crd_len - ivsize, ivsize,
- (caddr_t)ses->ses_iv);
- break;
- }
- }
-
if (re->re_flags & SAFE_QFLAGS_COPYOUTICV) {
- /* copy out ICV result */
- for (crd = crp->crp_desc; crd; crd = crd->crd_next) {
- if (!(crd->crd_alg == CRYPTO_MD5_HMAC ||
- crd->crd_alg == CRYPTO_SHA1_HMAC ||
- crd->crd_alg == CRYPTO_NULL_HMAC))
- continue;
- if (crd->crd_alg == CRYPTO_SHA1_HMAC) {
- /*
- * SHA-1 ICV's are byte-swapped; fix 'em up
- * before copy them to their destination.
- */
- re->re_sastate.sa_saved_indigest[0] =
- bswap32(re->re_sastate.sa_saved_indigest[0]);
- re->re_sastate.sa_saved_indigest[1] =
- bswap32(re->re_sastate.sa_saved_indigest[1]);
- re->re_sastate.sa_saved_indigest[2] =
- bswap32(re->re_sastate.sa_saved_indigest[2]);
- }
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- crd->crd_inject, ses->ses_mlen,
- (caddr_t)re->re_sastate.sa_saved_indigest);
- break;
+ if (csp->csp_auth_alg == CRYPTO_SHA1_HMAC) {
+ /*
+ * SHA-1 ICV's are byte-swapped; fix 'em up
+ * before copying them to their destination.
+ */
+ re->re_sastate.sa_saved_indigest[0] =
+ bswap32(re->re_sastate.sa_saved_indigest[0]);
+ re->re_sastate.sa_saved_indigest[1] =
+ bswap32(re->re_sastate.sa_saved_indigest[1]);
+ re->re_sastate.sa_saved_indigest[2] =
+ bswap32(re->re_sastate.sa_saved_indigest[2]);
}
+
+ if (crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) {
+ crypto_copydata(crp, crp->crp_digest_start,
+ ses->ses_mlen, hash);
+ if (timingsafe_bcmp(re->re_sastate.sa_saved_indigest,
+ hash, ses->ses_mlen) != 0)
+ crp->crp_etype = EBADMSG;
+ } else
+ crypto_copyback(crp, crp->crp_digest_start,
+ ses->ses_mlen, re->re_sastate.sa_saved_indigest);
}
crypto_done(crp);
}
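The new verify path above no longer copies the computed ICV back unconditionally: for CRYPTO_OP_VERIFY_DIGEST requests the driver fetches the expected digest from the request and compares it with timingsafe_bcmp(), failing the op with EBADMSG on mismatch. A minimal userspace sketch of such a constant-time comparison (modeled on the timingsafe_bcmp(3) contract; this is not the libc implementation):

```c
#include <stddef.h>

/*
 * Constant-time comparison: examine every byte regardless of where
 * the first mismatch occurs, so the running time does not leak the
 * position of the difference. Returns 0 iff the buffers are equal.
 */
static int
ct_bcmp(const void *a, const void *b, size_t n)
{
	const unsigned char *pa = a, *pb = b;
	unsigned char diff = 0;
	size_t i;

	for (i = 0; i < n; i++)
		diff |= pa[i] ^ pb[i];
	return (diff != 0);
}
```

An early-exit memcmp() would return as soon as bytes differ, which is exactly the timing side channel a MAC check must avoid.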
@@ -1921,7 +1798,7 @@ safe_free_entry(struct safe_softc *sc, struct safe_ringentry *re)
/*
* Free header MCR
*/
- if ((re->re_dst_m != NULL) && (re->re_src_m != re->re_dst_m))
+ if (re->re_dst_m != NULL)
m_freem(re->re_dst_m);
crp = (struct cryptop *)re->re_crp;
diff --git a/sys/dev/safe/safevar.h b/sys/dev/safe/safevar.h
index 024d00564562..e7d238669ed6 100644
--- a/sys/dev/safe/safevar.h
+++ b/sys/dev/safe/safevar.h
@@ -75,10 +75,6 @@ struct safe_dma_alloc {
* where each is mapped for DMA.
*/
struct safe_operand {
- union {
- struct mbuf *m;
- struct uio *io;
- } u;
bus_dmamap_t map;
bus_size_t mapsize;
int nsegs;
@@ -109,22 +105,18 @@ struct safe_ringentry {
struct safe_operand re_src; /* source operand */
struct safe_operand re_dst; /* destination operand */
+ struct mbuf *re_dst_m;
int unused;
int re_flags;
-#define SAFE_QFLAGS_COPYOUTIV 0x1 /* copy back on completion */
#define SAFE_QFLAGS_COPYOUTICV 0x2 /* copy back on completion */
};
-#define re_src_m re_src.u.m
-#define re_src_io re_src.u.io
#define re_src_map re_src.map
#define re_src_nsegs re_src.nsegs
#define re_src_segs re_src.segs
#define re_src_mapsize re_src.mapsize
-#define re_dst_m re_dst.u.m
-#define re_dst_io re_dst.u.io
#define re_dst_map re_dst.map
#define re_dst_nsegs re_dst.nsegs
#define re_dst_segs re_dst.segs
@@ -138,7 +130,6 @@ struct safe_session {
u_int32_t ses_mlen; /* hmac length in bytes */
u_int32_t ses_hminner[5]; /* hmac inner state */
u_int32_t ses_hmouter[5]; /* hmac outer state */
- u_int32_t ses_iv[4]; /* DES/3DES/AES iv */
};
struct safe_softc {
@@ -157,6 +148,7 @@ struct safe_softc {
int sc_suspended;
int sc_needwakeup; /* notify crypto layer */
int32_t sc_cid; /* crypto tag */
+ uint32_t sc_devinfo;
struct safe_dma_alloc sc_ringalloc; /* PE ring allocation state */
struct safe_ringentry *sc_ring; /* PE ring */
struct safe_ringentry *sc_ringtop; /* PE ring top */
diff --git a/sys/dev/sec/sec.c b/sys/dev/sec/sec.c
index 76f808757845..3b3ea0018060 100644
--- a/sys/dev/sec/sec.c
+++ b/sys/dev/sec/sec.c
@@ -51,6 +51,7 @@ __FBSDID("$FreeBSD$");
#include <machine/resource.h>
#include <opencrypto/cryptodev.h>
+#include <opencrypto/xform_auth.h>
#include "cryptodev_if.h"
#include <dev/ofw/ofw_bus_subr.h>
@@ -74,7 +75,7 @@ static int sec_init(struct sec_softc *sc);
static int sec_alloc_dma_mem(struct sec_softc *sc,
struct sec_dma_mem *dma_mem, bus_size_t size);
static int sec_desc_map_dma(struct sec_softc *sc,
- struct sec_dma_mem *dma_mem, void *mem, bus_size_t size, int type,
+ struct sec_dma_mem *dma_mem, struct cryptop *crp, bus_size_t size,
struct sec_desc_map_info *sdmi);
static void sec_free_dma_mem(struct sec_dma_mem *dma_mem);
static void sec_enqueue(struct sec_softc *sc);
@@ -82,48 +83,43 @@ static int sec_enqueue_desc(struct sec_softc *sc, struct sec_desc *desc,
int channel);
static int sec_eu_channel(struct sec_softc *sc, int eu);
static int sec_make_pointer(struct sec_softc *sc, struct sec_desc *desc,
- u_int n, void *data, bus_size_t doffset, bus_size_t dsize, int dtype);
+ u_int n, struct cryptop *crp, bus_size_t doffset, bus_size_t dsize);
static int sec_make_pointer_direct(struct sec_softc *sc,
struct sec_desc *desc, u_int n, bus_addr_t data, bus_size_t dsize);
+static int sec_probesession(device_t dev,
+ const struct crypto_session_params *csp);
static int sec_newsession(device_t dev, crypto_session_t cses,
- struct cryptoini *cri);
+ const struct crypto_session_params *csp);
static int sec_process(device_t dev, struct cryptop *crp, int hint);
-static int sec_split_cri(struct cryptoini *cri, struct cryptoini **enc,
- struct cryptoini **mac);
-static int sec_split_crp(struct cryptop *crp, struct cryptodesc **enc,
- struct cryptodesc **mac);
static int sec_build_common_ns_desc(struct sec_softc *sc,
- struct sec_desc *desc, struct sec_session *ses, struct cryptop *crp,
- struct cryptodesc *enc, int buftype);
+ struct sec_desc *desc, const struct crypto_session_params *csp,
+ struct cryptop *crp);
static int sec_build_common_s_desc(struct sec_softc *sc,
- struct sec_desc *desc, struct sec_session *ses, struct cryptop *crp,
- struct cryptodesc *enc, struct cryptodesc *mac, int buftype);
+ struct sec_desc *desc, const struct crypto_session_params *csp,
+ struct cryptop *crp);
static struct sec_desc *sec_find_desc(struct sec_softc *sc, bus_addr_t paddr);
/* AESU */
-static int sec_aesu_newsession(struct sec_softc *sc,
- struct sec_session *ses, struct cryptoini *enc, struct cryptoini *mac);
+static bool sec_aesu_newsession(const struct crypto_session_params *csp);
static int sec_aesu_make_desc(struct sec_softc *sc,
- struct sec_session *ses, struct sec_desc *desc, struct cryptop *crp,
- int buftype);
+ const struct crypto_session_params *csp, struct sec_desc *desc,
+ struct cryptop *crp);
/* DEU */
-static int sec_deu_newsession(struct sec_softc *sc,
- struct sec_session *ses, struct cryptoini *enc, struct cryptoini *mac);
+static bool sec_deu_newsession(const struct crypto_session_params *csp);
static int sec_deu_make_desc(struct sec_softc *sc,
- struct sec_session *ses, struct sec_desc *desc, struct cryptop *crp,
- int buftype);
+ const struct crypto_session_params *csp, struct sec_desc *desc,
+ struct cryptop *crp);
/* MDEU */
-static int sec_mdeu_can_handle(u_int alg);
-static int sec_mdeu_config(struct cryptodesc *crd,
+static bool sec_mdeu_can_handle(u_int alg);
+static int sec_mdeu_config(const struct crypto_session_params *csp,
u_int *eu, u_int *mode, u_int *hashlen);
-static int sec_mdeu_newsession(struct sec_softc *sc,
- struct sec_session *ses, struct cryptoini *enc, struct cryptoini *mac);
+static bool sec_mdeu_newsession(const struct crypto_session_params *csp);
static int sec_mdeu_make_desc(struct sec_softc *sc,
- struct sec_session *ses, struct sec_desc *desc, struct cryptop *crp,
- int buftype);
+ const struct crypto_session_params *csp, struct sec_desc *desc,
+ struct cryptop *crp);
static device_method_t sec_methods[] = {
/* Device interface */
@@ -136,6 +132,7 @@ static device_method_t sec_methods[] = {
DEVMETHOD(device_shutdown, sec_shutdown),
/* Crypto methods */
+ DEVMETHOD(cryptodev_probesession, sec_probesession),
DEVMETHOD(cryptodev_newsession, sec_newsession),
DEVMETHOD(cryptodev_process, sec_process),
@@ -362,24 +359,6 @@ sec_attach(device_t dev)
if (error)
goto fail6;
- /* Register in OCF (AESU) */
- crypto_register(sc->sc_cid, CRYPTO_AES_CBC, 0, 0);
-
- /* Register in OCF (DEU) */
- crypto_register(sc->sc_cid, CRYPTO_DES_CBC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_3DES_CBC, 0, 0);
-
- /* Register in OCF (MDEU) */
- crypto_register(sc->sc_cid, CRYPTO_MD5, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_MD5_HMAC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_SHA1, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_SHA1_HMAC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_SHA2_256_HMAC, 0, 0);
- if (sc->sc_version >= 3) {
- crypto_register(sc->sc_cid, CRYPTO_SHA2_384_HMAC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_SHA2_512_HMAC, 0, 0);
- }
-
return (0);
fail6:
@@ -545,9 +524,12 @@ sec_release_intr(struct sec_softc *sc, struct resource *ires, void *ihand,
static void
sec_primary_intr(void *arg)
{
+ struct sec_session *ses;
struct sec_softc *sc = arg;
struct sec_desc *desc;
+ struct cryptop *crp;
uint64_t isr;
+ uint8_t hash[HASH_MAX_LEN];
int i, wakeup = 0;
SEC_LOCK(sc, controller);
@@ -595,7 +577,26 @@ sec_primary_intr(void *arg)
SEC_DESC_SYNC_POINTERS(desc, BUS_DMASYNC_PREREAD |
BUS_DMASYNC_PREWRITE);
- desc->sd_crp->crp_etype = desc->sd_error;
+ crp = desc->sd_crp;
+ crp->crp_etype = desc->sd_error;
+ if (crp->crp_etype == 0) {
+ ses = crypto_get_driver_session(crp->crp_session);
+ if (ses->ss_mlen != 0) {
+ if (crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) {
+ crypto_copydata(crp,
+ crp->crp_digest_start,
+ ses->ss_mlen, hash);
+ if (timingsafe_bcmp(
+ desc->sd_desc->shd_digest,
+ hash, ses->ss_mlen) != 0)
+ crp->crp_etype = EBADMSG;
+ } else
+ crypto_copyback(crp,
+ crp->crp_digest_start,
+ ses->ss_mlen,
+ desc->sd_desc->shd_digest);
+ }
+ }
crypto_done(desc->sd_crp);
SEC_DESC_FREE_POINTERS(desc);
@@ -786,14 +787,6 @@ sec_dma_map_desc_cb(void *arg, bus_dma_segment_t *segs, int nseg,
sdmi->sdmi_lt_last = lt;
}
-static void
-sec_dma_map_desc_cb2(void *arg, bus_dma_segment_t *segs, int nseg,
- bus_size_t size, int error)
-{
-
- sec_dma_map_desc_cb(arg, segs, nseg, error);
-}
-
static int
sec_alloc_dma_mem(struct sec_softc *sc, struct sec_dma_mem *dma_mem,
bus_size_t size)
@@ -851,22 +844,22 @@ err1:
}
static int
-sec_desc_map_dma(struct sec_softc *sc, struct sec_dma_mem *dma_mem, void *mem,
- bus_size_t size, int type, struct sec_desc_map_info *sdmi)
+sec_desc_map_dma(struct sec_softc *sc, struct sec_dma_mem *dma_mem,
+ struct cryptop *crp, bus_size_t size, struct sec_desc_map_info *sdmi)
{
int error;
if (dma_mem->dma_vaddr != NULL)
return (EBUSY);
- switch (type) {
- case SEC_MEMORY:
+ switch (crp->crp_buf_type) {
+ case CRYPTO_BUF_CONTIG:
break;
- case SEC_UIO:
+ case CRYPTO_BUF_UIO:
size = SEC_FREE_LT_CNT(sc) * SEC_MAX_DMA_BLOCK_SIZE;
break;
- case SEC_MBUF:
- size = m_length((struct mbuf*)mem, NULL);
+ case CRYPTO_BUF_MBUF:
+ size = m_length(crp->crp_mbuf, NULL);
break;
default:
return (EINVAL);
@@ -899,20 +892,8 @@ sec_desc_map_dma(struct sec_softc *sc, struct sec_dma_mem *dma_mem, void *mem,
return (error);
}
- switch (type) {
- case SEC_MEMORY:
- error = bus_dmamap_load(dma_mem->dma_tag, dma_mem->dma_map,
- mem, size, sec_dma_map_desc_cb, sdmi, BUS_DMA_NOWAIT);
- break;
- case SEC_UIO:
- error = bus_dmamap_load_uio(dma_mem->dma_tag, dma_mem->dma_map,
- mem, sec_dma_map_desc_cb2, sdmi, BUS_DMA_NOWAIT);
- break;
- case SEC_MBUF:
- error = bus_dmamap_load_mbuf(dma_mem->dma_tag, dma_mem->dma_map,
- mem, sec_dma_map_desc_cb2, sdmi, BUS_DMA_NOWAIT);
- break;
- }
+ error = bus_dmamap_load_crp(dma_mem->dma_tag, dma_mem->dma_map, crp,
+ sec_dma_map_desc_cb, sdmi, BUS_DMA_NOWAIT);
if (error) {
device_printf(sc->sc_dev, "cannot get address of the DMA"
@@ -923,7 +904,7 @@ sec_desc_map_dma(struct sec_softc *sc, struct sec_dma_mem *dma_mem, void *mem,
}
dma_mem->dma_is_map = 1;
- dma_mem->dma_vaddr = mem;
+ dma_mem->dma_vaddr = crp;
return (0);
}
@@ -1130,7 +1111,7 @@ sec_make_pointer_direct(struct sec_softc *sc, struct sec_desc *desc, u_int n,
static int
sec_make_pointer(struct sec_softc *sc, struct sec_desc *desc,
- u_int n, void *data, bus_size_t doffset, bus_size_t dsize, int dtype)
+ u_int n, struct cryptop *crp, bus_size_t doffset, bus_size_t dsize)
{
struct sec_desc_map_info sdmi = { sc, dsize, doffset, NULL, NULL, 0 };
struct sec_hw_desc_ptr *ptr;
@@ -1138,14 +1119,8 @@ sec_make_pointer(struct sec_softc *sc, struct sec_desc *desc,
SEC_LOCK_ASSERT(sc, descriptors);
- /* For flat memory map only requested region */
- if (dtype == SEC_MEMORY) {
- data = (uint8_t*)(data) + doffset;
- sdmi.sdmi_offset = 0;
- }
-
- error = sec_desc_map_dma(sc, &(desc->sd_ptr_dmem[n]), data, dsize,
- dtype, &sdmi);
+ error = sec_desc_map_dma(sc, &(desc->sd_ptr_dmem[n]), crp, dsize,
+ &sdmi);
if (error)
return (error);
@@ -1162,115 +1137,116 @@ sec_make_pointer(struct sec_softc *sc, struct sec_desc *desc,
return (0);
}
-static int
-sec_split_cri(struct cryptoini *cri, struct cryptoini **enc,
- struct cryptoini **mac)
+static bool
+sec_cipher_supported(const struct crypto_session_params *csp)
{
- struct cryptoini *e, *m;
-
- e = cri;
- m = cri->cri_next;
-
- /* We can haldle only two operations */
- if (m && m->cri_next)
- return (EINVAL);
- if (sec_mdeu_can_handle(e->cri_alg)) {
- cri = m;
- m = e;
- e = cri;
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_AES_CBC:
+ /* AESU */
+ if (csp->csp_ivlen != AES_BLOCK_LEN)
+ return (false);
+ break;
+ case CRYPTO_DES_CBC:
+ case CRYPTO_3DES_CBC:
+ /* DEU */
+ if (csp->csp_ivlen != DES_BLOCK_LEN)
+ return (false);
+ break;
+ default:
+ return (false);
}
- if (m && !sec_mdeu_can_handle(m->cri_alg))
- return (EINVAL);
+ if (csp->csp_cipher_klen == 0 || csp->csp_cipher_klen > SEC_MAX_KEY_LEN)
+ return (false);
- *enc = e;
- *mac = m;
+ return (true);
+}
- return (0);
+static bool
+sec_auth_supported(struct sec_softc *sc,
+ const struct crypto_session_params *csp)
+{
+
+ switch (csp->csp_auth_alg) {
+ case CRYPTO_SHA2_384_HMAC:
+ case CRYPTO_SHA2_512_HMAC:
+ if (sc->sc_version < 3)
+ return (false);
+ /* FALLTHROUGH */
+ case CRYPTO_MD5_HMAC:
+ case CRYPTO_SHA1_HMAC:
+ case CRYPTO_SHA2_256_HMAC:
+ if (csp->csp_auth_klen > SEC_MAX_KEY_LEN)
+ return (false);
+ break;
+ case CRYPTO_MD5:
+ case CRYPTO_SHA1:
+ break;
+ default:
+ return (false);
+ }
+ return (true);
}
static int
-sec_split_crp(struct cryptop *crp, struct cryptodesc **enc,
- struct cryptodesc **mac)
+sec_probesession(device_t dev, const struct crypto_session_params *csp)
{
- struct cryptodesc *e, *m, *t;
-
- e = crp->crp_desc;
- m = e->crd_next;
+ struct sec_softc *sc = device_get_softc(dev);
- /* We can haldle only two operations */
- if (m && m->crd_next)
+ if (csp->csp_flags != 0)
return (EINVAL);
-
- if (sec_mdeu_can_handle(e->crd_alg)) {
- t = m;
- m = e;
- e = t;
- }
-
- if (m && !sec_mdeu_can_handle(m->crd_alg))
+ switch (csp->csp_mode) {
+ case CSP_MODE_DIGEST:
+ if (!sec_auth_supported(sc, csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_CIPHER:
+ if (!sec_cipher_supported(csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_ETA:
+ if (!sec_auth_supported(sc, csp) || !sec_cipher_supported(csp))
+ return (EINVAL);
+ break;
+ default:
return (EINVAL);
-
- *enc = e;
- *mac = m;
-
- return (0);
+ }
+ return (CRYPTODEV_PROBE_HARDWARE);
}
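sec_probesession() above returns CRYPTODEV_PROBE_HARDWARE on success; as the commit message explains, the framework treats these negative return values like device_probe() results and picks the "best" driver. A userspace sketch of that selection (constant values and struct names are assumptions for illustration; the real definitions live in the OCF headers):

```c
#include <stddef.h>

/* Assumed values; only the hardware > accelerated software >
 * plain software ordering matters for the selection logic. */
#define PROBE_HARDWARE (-10)
#define PROBE_ACCEL_SW (-20)
#define PROBE_SOFTWARE (-30)

struct fake_driver {
	const char *name;
	int probe_val;	/* what its probesession method returned */
};

/*
 * Pick the driver whose probe value is greatest (closest to zero),
 * mirroring device_probe-style selection. Positive values are errno
 * returns meaning "session not supported here".
 */
static const struct fake_driver *
pick_best(const struct fake_driver *d, int n)
{
	const struct fake_driver *best = NULL;
	int i;

	for (i = 0; i < n; i++) {
		if (d[i].probe_val > 0)
			continue;
		if (best == NULL || d[i].probe_val > best->probe_val)
			best = &d[i];
	}
	return (best);
}
```

This is why a driver no longer registers an algorithm list: support is decided per-session against the full crypto_session_params, mode included.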
static int
-sec_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
+sec_newsession(device_t dev, crypto_session_t cses,
+ const struct crypto_session_params *csp)
{
- struct sec_softc *sc = device_get_softc(dev);
struct sec_eu_methods *eu = sec_eus;
- struct cryptoini *enc = NULL;
- struct cryptoini *mac = NULL;
struct sec_session *ses;
- int error = -1;
-
- error = sec_split_cri(cri, &enc, &mac);
- if (error)
- return (error);
-
- /* Check key lengths */
- if (enc && enc->cri_key && (enc->cri_klen / 8) > SEC_MAX_KEY_LEN)
- return (E2BIG);
-
- if (mac && mac->cri_key && (mac->cri_klen / 8) > SEC_MAX_KEY_LEN)
- return (E2BIG);
-
- /* Only SEC 3.0 supports digests larger than 256 bits */
- if (sc->sc_version < 3 && mac && mac->cri_klen > 256)
- return (E2BIG);
ses = crypto_get_driver_session(cses);
/* Find EU for this session */
while (eu->sem_make_desc != NULL) {
- error = eu->sem_newsession(sc, ses, enc, mac);
- if (error >= 0)
+ if (eu->sem_newsession(csp))
break;
-
eu++;
}
-
- /* If not found, return EINVAL */
- if (error < 0)
- return (EINVAL);
+ KASSERT(eu->sem_make_desc != NULL, ("failed to find eu for session"));
/* Save cipher key */
- if (enc && enc->cri_key) {
- ses->ss_klen = enc->cri_klen / 8;
- memcpy(ses->ss_key, enc->cri_key, ses->ss_klen);
- }
+ if (csp->csp_cipher_key != NULL)
+ memcpy(ses->ss_key, csp->csp_cipher_key, csp->csp_cipher_klen);
/* Save digest key */
- if (mac && mac->cri_key) {
- ses->ss_mklen = mac->cri_klen / 8;
- memcpy(ses->ss_mkey, mac->cri_key, ses->ss_mklen);
+ if (csp->csp_auth_key != NULL)
+ memcpy(ses->ss_mkey, csp->csp_auth_key, csp->csp_auth_klen);
+
+ if (csp->csp_auth_alg != 0) {
+ if (csp->csp_auth_mlen == 0)
+ ses->ss_mlen = crypto_auth_hash(csp)->hashsize;
+ else
+ ses->ss_mlen = csp->csp_auth_mlen;
}
- ses->ss_eu = eu;
return (0);
}
@@ -1279,11 +1255,12 @@ sec_process(device_t dev, struct cryptop *crp, int hint)
{
struct sec_softc *sc = device_get_softc(dev);
struct sec_desc *desc = NULL;
- struct cryptodesc *mac, *enc;
+ const struct crypto_session_params *csp;
struct sec_session *ses;
- int buftype, error = 0;
+ int error = 0;
ses = crypto_get_driver_session(crp->crp_session);
+ csp = crypto_get_params(crp->crp_session);
/* Check for input length */
if (crp->crp_ilen > SEC_MAX_DMA_BLOCK_SIZE) {
@@ -1292,13 +1269,6 @@ sec_process(device_t dev, struct cryptop *crp, int hint)
return (0);
}
- /* Get descriptors */
- if (sec_split_crp(crp, &enc, &mac)) {
- crp->crp_etype = EINVAL;
- crypto_done(crp);
- return (0);
- }
-
SEC_LOCK(sc, descriptors);
SEC_DESC_SYNC(sc, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);
@@ -1315,56 +1285,29 @@ sec_process(device_t dev, struct cryptop *crp, int hint)
desc->sd_error = 0;
desc->sd_crp = crp;
- if (crp->crp_flags & CRYPTO_F_IOV)
- buftype = SEC_UIO;
- else if (crp->crp_flags & CRYPTO_F_IMBUF)
- buftype = SEC_MBUF;
- else
- buftype = SEC_MEMORY;
-
- if (enc && enc->crd_flags & CRD_F_ENCRYPT) {
- if (enc->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(desc->sd_desc->shd_iv, enc->crd_iv,
- ses->ss_ivlen);
- else
- arc4rand(desc->sd_desc->shd_iv, ses->ss_ivlen, 0);
-
- if ((enc->crd_flags & CRD_F_IV_PRESENT) == 0)
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- enc->crd_inject, ses->ss_ivlen,
+ if (csp->csp_cipher_alg != 0) {
+ if (crp->crp_flags & CRYPTO_F_IV_GENERATE) {
+ arc4rand(desc->sd_desc->shd_iv, csp->csp_ivlen, 0);
+ crypto_copyback(crp, crp->crp_iv_start, csp->csp_ivlen,
desc->sd_desc->shd_iv);
- } else if (enc) {
- if (enc->crd_flags & CRD_F_IV_EXPLICIT)
- memcpy(desc->sd_desc->shd_iv, enc->crd_iv,
- ses->ss_ivlen);
+ } else if (crp->crp_flags & CRYPTO_F_IV_SEPARATE)
+ memcpy(desc->sd_desc->shd_iv, crp->crp_iv,
+ csp->csp_ivlen);
else
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- enc->crd_inject, ses->ss_ivlen,
+ crypto_copydata(crp, crp->crp_iv_start, csp->csp_ivlen,
desc->sd_desc->shd_iv);
}
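The IV handling above replaces the old CRD_F_IV_EXPLICIT/CRD_F_IV_PRESENT juggling with three cases on the request flags: generate a fresh IV and write it back, take it from the separate crp_iv field, or read it out of the data buffer at crp_iv_start. A small sketch of that dispatch (flag values are placeholders, not the real cryptodev.h bits):

```c
/* Placeholder flag bits for illustration only. */
#define F_IV_GENERATE 0x1
#define F_IV_SEPARATE 0x2

enum iv_source { IV_GENERATE, IV_SEPARATE, IV_IN_BUFFER };

/* Classify where a request's IV comes from, mirroring the
 * three-way branch in the hunk above. */
static enum iv_source
classify_iv(int flags)
{
	if (flags & F_IV_GENERATE)
		return (IV_GENERATE);	/* driver picks a random IV */
	if (flags & F_IV_SEPARATE)
		return (IV_SEPARATE);	/* IV carried in crp_iv[] */
	return (IV_IN_BUFFER);		/* IV read at crp_iv_start */
}
```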
- if (enc && enc->crd_flags & CRD_F_KEY_EXPLICIT) {
- if ((enc->crd_klen / 8) <= SEC_MAX_KEY_LEN) {
- ses->ss_klen = enc->crd_klen / 8;
- memcpy(ses->ss_key, enc->crd_key, ses->ss_klen);
- } else
- error = E2BIG;
- }
+ if (crp->crp_cipher_key != NULL)
+ memcpy(ses->ss_key, crp->crp_cipher_key, csp->csp_cipher_klen);
- if (!error && mac && mac->crd_flags & CRD_F_KEY_EXPLICIT) {
- if ((mac->crd_klen / 8) <= SEC_MAX_KEY_LEN) {
- ses->ss_mklen = mac->crd_klen / 8;
- memcpy(ses->ss_mkey, mac->crd_key, ses->ss_mklen);
- } else
- error = E2BIG;
- }
+ if (crp->crp_auth_key != NULL)
+ memcpy(ses->ss_mkey, crp->crp_auth_key, csp->csp_auth_klen);
- if (!error) {
- memcpy(desc->sd_desc->shd_key, ses->ss_key, ses->ss_klen);
- memcpy(desc->sd_desc->shd_mkey, ses->ss_mkey, ses->ss_mklen);
+ memcpy(desc->sd_desc->shd_key, ses->ss_key, csp->csp_cipher_klen);
+ memcpy(desc->sd_desc->shd_mkey, ses->ss_mkey, csp->csp_auth_klen);
- error = ses->ss_eu->sem_make_desc(sc, ses, desc, crp, buftype);
- }
+ error = ses->ss_eu->sem_make_desc(sc, csp, desc, crp);
if (error) {
SEC_DESC_FREE_POINTERS(desc);
@@ -1400,8 +1343,7 @@ sec_process(device_t dev, struct cryptop *crp, int hint)
static int
sec_build_common_ns_desc(struct sec_softc *sc, struct sec_desc *desc,
- struct sec_session *ses, struct cryptop *crp, struct cryptodesc *enc,
- int buftype)
+ const struct crypto_session_params *csp, struct cryptop *crp)
{
struct sec_hw_desc *hd = desc->sd_desc;
int error;
@@ -1417,25 +1359,25 @@ sec_build_common_ns_desc(struct sec_softc *sc, struct sec_desc *desc,
/* Pointer 1: IV IN */
error = sec_make_pointer_direct(sc, desc, 1, desc->sd_desc_paddr +
- offsetof(struct sec_hw_desc, shd_iv), ses->ss_ivlen);
+ offsetof(struct sec_hw_desc, shd_iv), csp->csp_ivlen);
if (error)
return (error);
/* Pointer 2: Cipher Key */
error = sec_make_pointer_direct(sc, desc, 2, desc->sd_desc_paddr +
- offsetof(struct sec_hw_desc, shd_key), ses->ss_klen);
+ offsetof(struct sec_hw_desc, shd_key), csp->csp_cipher_klen);
if (error)
return (error);
/* Pointer 3: Data IN */
- error = sec_make_pointer(sc, desc, 3, crp->crp_buf, enc->crd_skip,
- enc->crd_len, buftype);
+ error = sec_make_pointer(sc, desc, 3, crp, crp->crp_payload_start,
+ crp->crp_payload_length);
if (error)
return (error);
/* Pointer 4: Data OUT */
- error = sec_make_pointer(sc, desc, 4, crp->crp_buf, enc->crd_skip,
- enc->crd_len, buftype);
+ error = sec_make_pointer(sc, desc, 4, crp, crp->crp_payload_start,
+ crp->crp_payload_length);
if (error)
return (error);
@@ -1452,20 +1394,13 @@ sec_build_common_ns_desc(struct sec_softc *sc, struct sec_desc *desc,
static int
sec_build_common_s_desc(struct sec_softc *sc, struct sec_desc *desc,
- struct sec_session *ses, struct cryptop *crp, struct cryptodesc *enc,
- struct cryptodesc *mac, int buftype)
+ const struct crypto_session_params *csp, struct cryptop *crp)
{
struct sec_hw_desc *hd = desc->sd_desc;
u_int eu, mode, hashlen;
int error;
- if (mac->crd_len < enc->crd_len)
- return (EINVAL);
-
- if (mac->crd_skip + mac->crd_len != enc->crd_skip + enc->crd_len)
- return (EINVAL);
-
- error = sec_mdeu_config(mac, &eu, &mode, &hashlen);
+ error = sec_mdeu_config(csp, &eu, &mode, &hashlen);
if (error)
return (error);
@@ -1475,144 +1410,107 @@ sec_build_common_s_desc(struct sec_softc *sc, struct sec_desc *desc,
/* Pointer 0: HMAC Key */
error = sec_make_pointer_direct(sc, desc, 0, desc->sd_desc_paddr +
- offsetof(struct sec_hw_desc, shd_mkey), ses->ss_mklen);
+ offsetof(struct sec_hw_desc, shd_mkey), csp->csp_auth_klen);
if (error)
return (error);
/* Pointer 1: HMAC-Only Data IN */
- error = sec_make_pointer(sc, desc, 1, crp->crp_buf, mac->crd_skip,
- mac->crd_len - enc->crd_len, buftype);
+ error = sec_make_pointer(sc, desc, 1, crp, crp->crp_aad_start,
+ crp->crp_aad_length);
if (error)
return (error);
/* Pointer 2: Cipher Key */
error = sec_make_pointer_direct(sc, desc, 2, desc->sd_desc_paddr +
- offsetof(struct sec_hw_desc, shd_key), ses->ss_klen);
+ offsetof(struct sec_hw_desc, shd_key), csp->csp_cipher_klen);
if (error)
return (error);
/* Pointer 3: IV IN */
error = sec_make_pointer_direct(sc, desc, 3, desc->sd_desc_paddr +
- offsetof(struct sec_hw_desc, shd_iv), ses->ss_ivlen);
+ offsetof(struct sec_hw_desc, shd_iv), csp->csp_ivlen);
if (error)
return (error);
/* Pointer 4: Data IN */
- error = sec_make_pointer(sc, desc, 4, crp->crp_buf, enc->crd_skip,
- enc->crd_len, buftype);
+ error = sec_make_pointer(sc, desc, 4, crp, crp->crp_payload_start,
+ crp->crp_payload_length);
if (error)
return (error);
/* Pointer 5: Data OUT */
- error = sec_make_pointer(sc, desc, 5, crp->crp_buf, enc->crd_skip,
- enc->crd_len, buftype);
+ error = sec_make_pointer(sc, desc, 5, crp, crp->crp_payload_start,
+ crp->crp_payload_length);
if (error)
return (error);
/* Pointer 6: HMAC OUT */
- error = sec_make_pointer(sc, desc, 6, crp->crp_buf, mac->crd_inject,
- hashlen, buftype);
+ error = sec_make_pointer_direct(sc, desc, 6, desc->sd_desc_paddr +
+ offsetof(struct sec_hw_desc, shd_digest), hashlen);
return (error);
}
/* AESU */
-static int
-sec_aesu_newsession(struct sec_softc *sc, struct sec_session *ses,
- struct cryptoini *enc, struct cryptoini *mac)
+static bool
+sec_aesu_newsession(const struct crypto_session_params *csp)
{
- if (enc == NULL)
- return (-1);
-
- if (enc->cri_alg != CRYPTO_AES_CBC)
- return (-1);
-
- ses->ss_ivlen = AES_BLOCK_LEN;
-
- return (0);
+ return (csp->csp_cipher_alg == CRYPTO_AES_CBC);
}
static int
-sec_aesu_make_desc(struct sec_softc *sc, struct sec_session *ses,
- struct sec_desc *desc, struct cryptop *crp, int buftype)
+sec_aesu_make_desc(struct sec_softc *sc,
+ const struct crypto_session_params *csp, struct sec_desc *desc,
+ struct cryptop *crp)
{
struct sec_hw_desc *hd = desc->sd_desc;
- struct cryptodesc *enc, *mac;
int error;
- error = sec_split_crp(crp, &enc, &mac);
- if (error)
- return (error);
-
- if (!enc)
- return (EINVAL);
-
hd->shd_eu_sel0 = SEC_EU_AESU;
hd->shd_mode0 = SEC_AESU_MODE_CBC;
- if (enc->crd_alg != CRYPTO_AES_CBC)
- return (EINVAL);
-
- if (enc->crd_flags & CRD_F_ENCRYPT) {
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) {
hd->shd_mode0 |= SEC_AESU_MODE_ED;
hd->shd_dir = 0;
} else
hd->shd_dir = 1;
- if (mac)
- error = sec_build_common_s_desc(sc, desc, ses, crp, enc, mac,
- buftype);
+ if (csp->csp_mode == CSP_MODE_ETA)
+ error = sec_build_common_s_desc(sc, desc, csp, crp);
else
- error = sec_build_common_ns_desc(sc, desc, ses, crp, enc,
- buftype);
+ error = sec_build_common_ns_desc(sc, desc, csp, crp);
return (error);
}
/* DEU */
-static int
-sec_deu_newsession(struct sec_softc *sc, struct sec_session *ses,
- struct cryptoini *enc, struct cryptoini *mac)
+static bool
+sec_deu_newsession(const struct crypto_session_params *csp)
{
- if (enc == NULL)
- return (-1);
-
- switch (enc->cri_alg) {
+ switch (csp->csp_cipher_alg) {
case CRYPTO_DES_CBC:
case CRYPTO_3DES_CBC:
- break;
+ return (true);
default:
- return (-1);
+ return (false);
}
-
- ses->ss_ivlen = DES_BLOCK_LEN;
-
- return (0);
}
static int
-sec_deu_make_desc(struct sec_softc *sc, struct sec_session *ses,
- struct sec_desc *desc, struct cryptop *crp, int buftype)
+sec_deu_make_desc(struct sec_softc *sc, const struct crypto_session_params *csp,
+ struct sec_desc *desc, struct cryptop *crp)
{
struct sec_hw_desc *hd = desc->sd_desc;
- struct cryptodesc *enc, *mac;
int error;
- error = sec_split_crp(crp, &enc, &mac);
- if (error)
- return (error);
-
- if (!enc)
- return (EINVAL);
-
hd->shd_eu_sel0 = SEC_EU_DEU;
hd->shd_mode0 = SEC_DEU_MODE_CBC;
- switch (enc->crd_alg) {
+ switch (csp->csp_cipher_alg) {
case CRYPTO_3DES_CBC:
hd->shd_mode0 |= SEC_DEU_MODE_TS;
break;
@@ -1622,25 +1520,23 @@ sec_deu_make_desc(struct sec_softc *sc, struct sec_session *ses,
return (EINVAL);
}
- if (enc->crd_flags & CRD_F_ENCRYPT) {
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) {
hd->shd_mode0 |= SEC_DEU_MODE_ED;
hd->shd_dir = 0;
} else
hd->shd_dir = 1;
- if (mac)
- error = sec_build_common_s_desc(sc, desc, ses, crp, enc, mac,
- buftype);
+ if (csp->csp_mode == CSP_MODE_ETA)
+ error = sec_build_common_s_desc(sc, desc, csp, crp);
else
- error = sec_build_common_ns_desc(sc, desc, ses, crp, enc,
- buftype);
+ error = sec_build_common_ns_desc(sc, desc, csp, crp);
return (error);
}
/* MDEU */
-static int
+static bool
sec_mdeu_can_handle(u_int alg)
{
switch (alg) {
@@ -1651,20 +1547,21 @@ sec_mdeu_can_handle(u_int alg)
case CRYPTO_SHA2_256_HMAC:
case CRYPTO_SHA2_384_HMAC:
case CRYPTO_SHA2_512_HMAC:
- return (1);
+ return (true);
default:
- return (0);
+ return (false);
}
}
static int
-sec_mdeu_config(struct cryptodesc *crd, u_int *eu, u_int *mode, u_int *hashlen)
+sec_mdeu_config(const struct crypto_session_params *csp, u_int *eu, u_int *mode,
+ u_int *hashlen)
{
*mode = SEC_MDEU_MODE_PD | SEC_MDEU_MODE_INIT;
*eu = SEC_EU_NONE;
- switch (crd->crd_alg) {
+ switch (csp->csp_auth_alg) {
case CRYPTO_MD5_HMAC:
*mode |= SEC_MDEU_MODE_HMAC;
/* FALLTHROUGH */
@@ -1703,34 +1600,23 @@ sec_mdeu_config(struct cryptodesc *crd, u_int *eu, u_int *mode, u_int *hashlen)
return (0);
}
-static int
-sec_mdeu_newsession(struct sec_softc *sc, struct sec_session *ses,
- struct cryptoini *enc, struct cryptoini *mac)
+static bool
+sec_mdeu_newsession(const struct crypto_session_params *csp)
{
- if (mac && sec_mdeu_can_handle(mac->cri_alg))
- return (0);
-
- return (-1);
+ return (sec_mdeu_can_handle(csp->csp_auth_alg));
}
static int
-sec_mdeu_make_desc(struct sec_softc *sc, struct sec_session *ses,
- struct sec_desc *desc, struct cryptop *crp, int buftype)
+sec_mdeu_make_desc(struct sec_softc *sc,
+ const struct crypto_session_params *csp,
+ struct sec_desc *desc, struct cryptop *crp)
{
- struct cryptodesc *enc, *mac;
struct sec_hw_desc *hd = desc->sd_desc;
u_int eu, mode, hashlen;
int error;
- error = sec_split_crp(crp, &enc, &mac);
- if (error)
- return (error);
-
- if (enc)
- return (EINVAL);
-
- error = sec_mdeu_config(mac, &eu, &mode, &hashlen);
+ error = sec_mdeu_config(csp, &eu, &mode, &hashlen);
if (error)
return (error);
@@ -1754,7 +1640,7 @@ sec_mdeu_make_desc(struct sec_softc *sc, struct sec_session *ses,
if (hd->shd_mode0 & SEC_MDEU_MODE_HMAC)
error = sec_make_pointer_direct(sc, desc, 2,
desc->sd_desc_paddr + offsetof(struct sec_hw_desc,
- shd_mkey), ses->ss_mklen);
+ shd_mkey), csp->csp_auth_klen);
else
error = sec_make_pointer_direct(sc, desc, 2, 0, 0);
@@ -1762,8 +1648,8 @@ sec_mdeu_make_desc(struct sec_softc *sc, struct sec_session *ses,
return (error);
/* Pointer 3: Input Data */
- error = sec_make_pointer(sc, desc, 3, crp->crp_buf, mac->crd_skip,
- mac->crd_len, buftype);
+ error = sec_make_pointer(sc, desc, 3, crp, crp->crp_payload_start,
+ crp->crp_payload_length);
if (error)
return (error);
@@ -1773,8 +1659,8 @@ sec_mdeu_make_desc(struct sec_softc *sc, struct sec_session *ses,
return (error);
/* Pointer 5: Hash out */
- error = sec_make_pointer(sc, desc, 5, crp->crp_buf,
- mac->crd_inject, hashlen, buftype);
+ error = sec_make_pointer_direct(sc, desc, 5, desc->sd_desc_paddr +
+ offsetof(struct sec_hw_desc, shd_digest), hashlen);
if (error)
return (error);
diff --git a/sys/dev/sec/sec.h b/sys/dev/sec/sec.h
index 05b15039ad64..6ad482316a54 100644
--- a/sys/dev/sec/sec.h
+++ b/sys/dev/sec/sec.h
@@ -98,6 +98,7 @@ struct sec_hw_desc {
uint8_t shd_iv[SEC_MAX_IV_LEN];
uint8_t shd_key[SEC_MAX_KEY_LEN];
uint8_t shd_mkey[SEC_MAX_KEY_LEN];
+ uint8_t shd_digest[HASH_MAX_LEN];
} __packed__;
#define shd_eu_sel0 shd_control.request.eu_sel0
@@ -144,21 +145,17 @@ struct sec_lt {
};
struct sec_eu_methods {
- int (*sem_newsession)(struct sec_softc *sc,
- struct sec_session *ses, struct cryptoini *enc,
- struct cryptoini *mac);
+ bool (*sem_newsession)(const struct crypto_session_params *csp);
int (*sem_make_desc)(struct sec_softc *sc,
- struct sec_session *ses, struct sec_desc *desc,
- struct cryptop *crp, int buftype);
+ const struct crypto_session_params *csp, struct sec_desc *desc,
+ struct cryptop *crp);
};
struct sec_session {
struct sec_eu_methods *ss_eu;
uint8_t ss_key[SEC_MAX_KEY_LEN];
uint8_t ss_mkey[SEC_MAX_KEY_LEN];
- u_int ss_klen;
- u_int ss_mklen;
- u_int ss_ivlen;
+ int ss_mlen;
};
struct sec_desc_map_info {
@@ -319,11 +316,6 @@ struct sec_softc {
(((sc)->sc_lt_free_cnt - (sc)->sc_lt_alloc_cnt - 1) \
& (SEC_LT_ENTRIES - 1))
-/* DMA Maping defines */
-#define SEC_MEMORY 0
-#define SEC_UIO 1
-#define SEC_MBUF 2
-
/* Size of SEC registers area */
#define SEC_IO_SIZE 0x10000
diff --git a/sys/dev/ubsec/ubsec.c b/sys/dev/ubsec/ubsec.c
index 19f46458ac3b..e4b324e05f86 100644
--- a/sys/dev/ubsec/ubsec.c
+++ b/sys/dev/ubsec/ubsec.c
@@ -61,6 +61,7 @@ __FBSDID("$FreeBSD$");
#include <sys/mutex.h>
#include <sys/sysctl.h>
#include <sys/endian.h>
+#include <sys/uio.h>
#include <vm/vm.h>
#include <vm/pmap.h>
@@ -70,10 +71,8 @@ __FBSDID("$FreeBSD$");
#include <sys/bus.h>
#include <sys/rman.h>
-#include <crypto/sha1.h>
#include <opencrypto/cryptodev.h>
-#include <opencrypto/cryptosoft.h>
-#include <sys/md5.h>
+#include <opencrypto/xform_auth.h>
#include <sys/random.h>
#include <sys/kobj.h>
@@ -111,7 +110,9 @@ static int ubsec_suspend(device_t);
static int ubsec_resume(device_t);
static int ubsec_shutdown(device_t);
-static int ubsec_newsession(device_t, crypto_session_t, struct cryptoini *);
+static int ubsec_probesession(device_t, const struct crypto_session_params *);
+static int ubsec_newsession(device_t, crypto_session_t,
+ const struct crypto_session_params *);
static int ubsec_process(device_t, struct cryptop *, int);
static int ubsec_kprocess(device_t, struct cryptkop *, int);
@@ -125,6 +126,7 @@ static device_method_t ubsec_methods[] = {
DEVMETHOD(device_shutdown, ubsec_shutdown),
/* crypto device methods */
+ DEVMETHOD(cryptodev_probesession, ubsec_probesession),
DEVMETHOD(cryptodev_newsession, ubsec_newsession),
DEVMETHOD(cryptodev_process, ubsec_process),
DEVMETHOD(cryptodev_kprocess, ubsec_kprocess),
@@ -348,13 +350,6 @@ ubsec_attach(device_t dev)
goto bad2;
}
- sc->sc_cid = crypto_get_driverid(dev, sizeof(struct ubsec_session),
- CRYPTOCAP_F_HARDWARE);
- if (sc->sc_cid < 0) {
- device_printf(dev, "could not get crypto driver id\n");
- goto bad3;
- }
-
/*
* Setup DMA descriptor area.
*/
@@ -370,7 +365,7 @@ ubsec_attach(device_t dev)
NULL, NULL, /* lockfunc, lockarg */
&sc->sc_dmat)) {
device_printf(dev, "cannot allocate DMA tag\n");
- goto bad4;
+ goto bad3;
}
SIMPLEQ_INIT(&sc->sc_freequeue);
dmap = sc->sc_dmaa;
@@ -404,11 +399,6 @@ ubsec_attach(device_t dev)
device_printf(sc->sc_dev, "%s\n", ubsec_partname(sc));
- crypto_register(sc->sc_cid, CRYPTO_3DES_CBC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_DES_CBC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_MD5_HMAC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_SHA1_HMAC, 0, 0);
-
/*
* Reset Broadcom chip
*/
@@ -424,6 +414,13 @@ ubsec_attach(device_t dev)
*/
ubsec_init_board(sc);
+ sc->sc_cid = crypto_get_driverid(dev, sizeof(struct ubsec_session),
+ CRYPTOCAP_F_HARDWARE);
+ if (sc->sc_cid < 0) {
+ device_printf(dev, "could not get crypto driver id\n");
+ goto bad4;
+ }
+
#ifndef UBSEC_NO_RNG
if (sc->sc_flags & UBS_FLAGS_RNG) {
sc->sc_statmask |= BS_STAT_MCR2_DONE;
@@ -477,7 +474,15 @@ skip_rng:
}
return (0);
bad4:
- crypto_unregister_all(sc->sc_cid);
+ while (!SIMPLEQ_EMPTY(&sc->sc_freequeue)) {
+ struct ubsec_q *q;
+
+ q = SIMPLEQ_FIRST(&sc->sc_freequeue);
+ SIMPLEQ_REMOVE_HEAD(&sc->sc_freequeue, q_next);
+ ubsec_dma_free(sc, &q->q_dma->d_alloc);
+ free(q, M_DEVBUF);
+ }
+ bus_dma_tag_destroy(sc->sc_dmat);
bad3:
bus_teardown_intr(dev, sc->sc_irq, sc->sc_ih);
bad2:
@@ -498,13 +503,14 @@ ubsec_detach(device_t dev)
/* XXX wait/abort active ops */
+ crypto_unregister_all(sc->sc_cid);
+
/* disable interrupts */
WRITE_REG(sc, BS_CTRL, READ_REG(sc, BS_CTRL) &~
(BS_CTRL_MCR2INT | BS_CTRL_MCR1INT | BS_CTRL_DMAERR));
callout_stop(&sc->sc_rngto);
-
- crypto_unregister_all(sc->sc_cid);
+ bus_teardown_intr(dev, sc->sc_irq, sc->sc_ih);
#ifdef UBSEC_RNDTEST
if (sc->sc_rndtest)
@@ -531,7 +537,6 @@ ubsec_detach(device_t dev)
mtx_destroy(&sc->sc_mcr2lock);
bus_generic_detach(dev);
- bus_teardown_intr(dev, sc->sc_irq, sc->sc_ih);
bus_release_resource(dev, SYS_RES_IRQ, 0, sc->sc_irq);
bus_dma_tag_destroy(sc->sc_dmat);
@@ -826,7 +831,7 @@ feed1:
}
static void
-ubsec_setup_enckey(struct ubsec_session *ses, int algo, caddr_t key)
+ubsec_setup_enckey(struct ubsec_session *ses, int algo, const void *key)
{
/* Go ahead and compute key in ubsec's byte order */
@@ -846,112 +851,134 @@ ubsec_setup_enckey(struct ubsec_session *ses, int algo, caddr_t key)
}
static void
-ubsec_setup_mackey(struct ubsec_session *ses, int algo, caddr_t key, int klen)
+ubsec_setup_mackey(struct ubsec_session *ses, int algo, const char *key,
+ int klen)
{
MD5_CTX md5ctx;
SHA1_CTX sha1ctx;
- int i;
-
- for (i = 0; i < klen; i++)
- key[i] ^= HMAC_IPAD_VAL;
if (algo == CRYPTO_MD5_HMAC) {
- MD5Init(&md5ctx);
- MD5Update(&md5ctx, key, klen);
- MD5Update(&md5ctx, hmac_ipad_buffer, MD5_BLOCK_LEN - klen);
+ hmac_init_ipad(&auth_hash_hmac_md5, key, klen, &md5ctx);
bcopy(md5ctx.state, ses->ses_hminner, sizeof(md5ctx.state));
+
+ hmac_init_opad(&auth_hash_hmac_md5, key, klen, &md5ctx);
+ bcopy(md5ctx.state, ses->ses_hmouter, sizeof(md5ctx.state));
+
+ explicit_bzero(&md5ctx, sizeof(md5ctx));
} else {
- SHA1Init(&sha1ctx);
- SHA1Update(&sha1ctx, key, klen);
- SHA1Update(&sha1ctx, hmac_ipad_buffer,
- SHA1_BLOCK_LEN - klen);
+ hmac_init_ipad(&auth_hash_hmac_sha1, key, klen, &sha1ctx);
bcopy(sha1ctx.h.b32, ses->ses_hminner, sizeof(sha1ctx.h.b32));
+
+ hmac_init_opad(&auth_hash_hmac_sha1, key, klen, &sha1ctx);
+ bcopy(sha1ctx.h.b32, ses->ses_hmouter, sizeof(sha1ctx.h.b32));
+
+ explicit_bzero(&sha1ctx, sizeof(sha1ctx));
}
+}
- for (i = 0; i < klen; i++)
- key[i] ^= (HMAC_IPAD_VAL ^ HMAC_OPAD_VAL);
+static bool
+ubsec_auth_supported(const struct crypto_session_params *csp)
+{
- if (algo == CRYPTO_MD5_HMAC) {
- MD5Init(&md5ctx);
- MD5Update(&md5ctx, key, klen);
- MD5Update(&md5ctx, hmac_opad_buffer, MD5_BLOCK_LEN - klen);
- bcopy(md5ctx.state, ses->ses_hmouter, sizeof(md5ctx.state));
- } else {
- SHA1Init(&sha1ctx);
- SHA1Update(&sha1ctx, key, klen);
- SHA1Update(&sha1ctx, hmac_opad_buffer,
- SHA1_BLOCK_LEN - klen);
- bcopy(sha1ctx.h.b32, ses->ses_hmouter, sizeof(sha1ctx.h.b32));
+ switch (csp->csp_auth_alg) {
+ case CRYPTO_MD5_HMAC:
+ case CRYPTO_SHA1_HMAC:
+ return (true);
+ default:
+ return (false);
}
+}
+
+static bool
+ubsec_cipher_supported(const struct crypto_session_params *csp)
+{
- for (i = 0; i < klen; i++)
- key[i] ^= HMAC_OPAD_VAL;
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_DES_CBC:
+ case CRYPTO_3DES_CBC:
+ return (csp->csp_ivlen == 8);
+ default:
+ return (false);
+ }
}
-/*
- * Allocate a new 'session' and return an encoded session id. 'sidp'
- * contains our registration id, and should contain an encoded session
- * id on successful allocation.
- */
static int
-ubsec_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
+ubsec_probesession(device_t dev, const struct crypto_session_params *csp)
{
- struct ubsec_softc *sc = device_get_softc(dev);
- struct cryptoini *c, *encini = NULL, *macini = NULL;
- struct ubsec_session *ses = NULL;
- if (cri == NULL || sc == NULL)
+ if (csp->csp_flags != 0)
return (EINVAL);
-
- for (c = cri; c != NULL; c = c->cri_next) {
- if (c->cri_alg == CRYPTO_MD5_HMAC ||
- c->cri_alg == CRYPTO_SHA1_HMAC) {
- if (macini)
- return (EINVAL);
- macini = c;
- } else if (c->cri_alg == CRYPTO_DES_CBC ||
- c->cri_alg == CRYPTO_3DES_CBC) {
- if (encini)
- return (EINVAL);
- encini = c;
- } else
+ switch (csp->csp_mode) {
+ case CSP_MODE_DIGEST:
+ if (!ubsec_auth_supported(csp))
return (EINVAL);
- }
- if (encini == NULL && macini == NULL)
+ break;
+ case CSP_MODE_CIPHER:
+ if (!ubsec_cipher_supported(csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_ETA:
+ if (!ubsec_auth_supported(csp) ||
+ !ubsec_cipher_supported(csp))
+ return (EINVAL);
+ break;
+ default:
return (EINVAL);
+ }
+
+ return (CRYPTODEV_PROBE_HARDWARE);
+}
+
+/*
+ * Allocate a new 'session'.
+ */
+static int
+ubsec_newsession(device_t dev, crypto_session_t cses,
+ const struct crypto_session_params *csp)
+{
+ struct ubsec_session *ses;
ses = crypto_get_driver_session(cses);
- if (encini) {
- /* get an IV, network byte order */
- /* XXX may read fewer than requested */
- read_random(ses->ses_iv, sizeof(ses->ses_iv));
-
- if (encini->cri_key != NULL) {
- ubsec_setup_enckey(ses, encini->cri_alg,
- encini->cri_key);
- }
- }
+ if (csp->csp_cipher_alg != 0 && csp->csp_cipher_key != NULL)
+ ubsec_setup_enckey(ses, csp->csp_cipher_alg,
+ csp->csp_cipher_key);
- if (macini) {
- ses->ses_mlen = macini->cri_mlen;
+ if (csp->csp_auth_alg != 0) {
+ ses->ses_mlen = csp->csp_auth_mlen;
if (ses->ses_mlen == 0) {
- if (macini->cri_alg == CRYPTO_MD5_HMAC)
+ if (csp->csp_auth_alg == CRYPTO_MD5_HMAC)
ses->ses_mlen = MD5_HASH_LEN;
else
ses->ses_mlen = SHA1_HASH_LEN;
}
- if (macini->cri_key != NULL) {
- ubsec_setup_mackey(ses, macini->cri_alg,
- macini->cri_key, macini->cri_klen / 8);
+ if (csp->csp_auth_key != NULL) {
+ ubsec_setup_mackey(ses, csp->csp_auth_alg,
+ csp->csp_auth_key, csp->csp_auth_klen);
}
}
return (0);
}
+static bus_size_t
+ubsec_crp_length(struct cryptop *crp)
+{
+
+ switch (crp->crp_buf_type) {
+ case CRYPTO_BUF_MBUF:
+ return (crp->crp_mbuf->m_pkthdr.len);
+ case CRYPTO_BUF_UIO:
+ return (crp->crp_uio->uio_resid);
+ case CRYPTO_BUF_CONTIG:
+ return (crp->crp_ilen);
+ default:
+ panic("bad crp buffer type");
+ }
+}
+
static void
-ubsec_op_cb(void *arg, bus_dma_segment_t *seg, int nsegs, bus_size_t mapsize, int error)
+ubsec_op_cb(void *arg, bus_dma_segment_t *seg, int nsegs, int error)
{
struct ubsec_operand *op = arg;
@@ -959,12 +986,11 @@ ubsec_op_cb(void *arg, bus_dma_segment_t *seg, int nsegs, bus_size_t mapsize, in
("Too many DMA segments returned when mapping operand"));
#ifdef UBSEC_DEBUG
if (ubsec_debug)
- printf("ubsec_op_cb: mapsize %u nsegs %d error %d\n",
- (u_int) mapsize, nsegs, error);
+ printf("ubsec_op_cb: nsegs %d error %d\n",
+ nsegs, error);
#endif
if (error != 0)
return;
- op->mapsize = mapsize;
op->nsegs = nsegs;
bcopy(seg, op->segs, nsegs * sizeof (seg[0]));
}
@@ -972,22 +998,17 @@ ubsec_op_cb(void *arg, bus_dma_segment_t *seg, int nsegs, bus_size_t mapsize, in
static int
ubsec_process(device_t dev, struct cryptop *crp, int hint)
{
+ const struct crypto_session_params *csp;
struct ubsec_softc *sc = device_get_softc(dev);
struct ubsec_q *q = NULL;
int err = 0, i, j, nicealign;
- struct cryptodesc *crd1, *crd2, *maccrd, *enccrd;
- int encoffset = 0, macoffset = 0, cpskip, cpoffset;
+ int cpskip, cpoffset;
int sskip, dskip, stheend, dtheend;
int16_t coffset;
struct ubsec_session *ses;
struct ubsec_pktctx ctx;
struct ubsec_dma *dmap = NULL;
- if (crp == NULL || crp->crp_callback == NULL || sc == NULL) {
- ubsecstats.hst_invalid++;
- return (EINVAL);
- }
-
mtx_lock(&sc->sc_freeqlock);
if (SIMPLEQ_EMPTY(&sc->sc_freequeue)) {
ubsecstats.hst_queuefull++;
@@ -1006,103 +1027,34 @@ ubsec_process(device_t dev, struct cryptop *crp, int hint)
q->q_dma = dmap;
ses = crypto_get_driver_session(crp->crp_session);
- if (crp->crp_flags & CRYPTO_F_IMBUF) {
- q->q_src_m = (struct mbuf *)crp->crp_buf;
- q->q_dst_m = (struct mbuf *)crp->crp_buf;
- } else if (crp->crp_flags & CRYPTO_F_IOV) {
- q->q_src_io = (struct uio *)crp->crp_buf;
- q->q_dst_io = (struct uio *)crp->crp_buf;
- } else {
- ubsecstats.hst_badflags++;
- err = EINVAL;
- goto errout; /* XXX we don't handle contiguous blocks! */
- }
-
bzero(&dmap->d_dma->d_mcr, sizeof(struct ubsec_mcr));
dmap->d_dma->d_mcr.mcr_pkts = htole16(1);
dmap->d_dma->d_mcr.mcr_flags = 0;
q->q_crp = crp;
- crd1 = crp->crp_desc;
- if (crd1 == NULL) {
- ubsecstats.hst_nodesc++;
- err = EINVAL;
- goto errout;
- }
- crd2 = crd1->crd_next;
-
- if (crd2 == NULL) {
- if (crd1->crd_alg == CRYPTO_MD5_HMAC ||
- crd1->crd_alg == CRYPTO_SHA1_HMAC) {
- maccrd = crd1;
- enccrd = NULL;
- } else if (crd1->crd_alg == CRYPTO_DES_CBC ||
- crd1->crd_alg == CRYPTO_3DES_CBC) {
- maccrd = NULL;
- enccrd = crd1;
- } else {
- ubsecstats.hst_badalg++;
- err = EINVAL;
- goto errout;
- }
- } else {
- if ((crd1->crd_alg == CRYPTO_MD5_HMAC ||
- crd1->crd_alg == CRYPTO_SHA1_HMAC) &&
- (crd2->crd_alg == CRYPTO_DES_CBC ||
- crd2->crd_alg == CRYPTO_3DES_CBC) &&
- ((crd2->crd_flags & CRD_F_ENCRYPT) == 0)) {
- maccrd = crd1;
- enccrd = crd2;
- } else if ((crd1->crd_alg == CRYPTO_DES_CBC ||
- crd1->crd_alg == CRYPTO_3DES_CBC) &&
- (crd2->crd_alg == CRYPTO_MD5_HMAC ||
- crd2->crd_alg == CRYPTO_SHA1_HMAC) &&
- (crd1->crd_flags & CRD_F_ENCRYPT)) {
- enccrd = crd1;
- maccrd = crd2;
- } else {
- /*
- * We cannot order the ubsec as requested
- */
- ubsecstats.hst_badalg++;
- err = EINVAL;
- goto errout;
- }
- }
+ csp = crypto_get_params(crp->crp_session);
- if (enccrd) {
- if (enccrd->crd_flags & CRD_F_KEY_EXPLICIT) {
- ubsec_setup_enckey(ses, enccrd->crd_alg,
- enccrd->crd_key);
+ if (csp->csp_cipher_alg != 0) {
+ if (crp->crp_cipher_key != NULL) {
+ ubsec_setup_enckey(ses, csp->csp_cipher_alg,
+ crp->crp_cipher_key);
}
- encoffset = enccrd->crd_skip;
ctx.pc_flags |= htole16(UBS_PKTCTX_ENC_3DES);
- if (enccrd->crd_flags & CRD_F_ENCRYPT) {
- q->q_flags |= UBSEC_QFLAGS_COPYOUTIV;
-
- if (enccrd->crd_flags & CRD_F_IV_EXPLICIT)
- bcopy(enccrd->crd_iv, ctx.pc_iv, 8);
- else {
- ctx.pc_iv[0] = ses->ses_iv[0];
- ctx.pc_iv[1] = ses->ses_iv[1];
- }
+ if (crp->crp_flags & CRYPTO_F_IV_GENERATE) {
+ arc4rand(ctx.pc_iv, csp->csp_ivlen, 0);
+ crypto_copyback(crp, crp->crp_iv_start,
+ csp->csp_ivlen, ctx.pc_iv);
+ } else if (crp->crp_flags & CRYPTO_F_IV_SEPARATE)
+ memcpy(ctx.pc_iv, crp->crp_iv, csp->csp_ivlen);
+ else
+ crypto_copydata(crp, crp->crp_iv_start, csp->csp_ivlen,
+ ctx.pc_iv);
- if ((enccrd->crd_flags & CRD_F_IV_PRESENT) == 0) {
- crypto_copyback(crp->crp_flags, crp->crp_buf,
- enccrd->crd_inject, 8, (caddr_t)ctx.pc_iv);
- }
- } else {
+ if (!CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) {
ctx.pc_flags |= htole16(UBS_PKTCTX_INBOUND);
-
- if (enccrd->crd_flags & CRD_F_IV_EXPLICIT)
- bcopy(enccrd->crd_iv, ctx.pc_iv, 8);
- else {
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- enccrd->crd_inject, 8, (caddr_t)ctx.pc_iv);
- }
}
ctx.pc_deskey[0] = ses->ses_deskey[0];
@@ -1115,15 +1067,13 @@ ubsec_process(device_t dev, struct cryptop *crp, int hint)
SWAP32(ctx.pc_iv[1]);
}
- if (maccrd) {
- if (maccrd->crd_flags & CRD_F_KEY_EXPLICIT) {
- ubsec_setup_mackey(ses, maccrd->crd_alg,
- maccrd->crd_key, maccrd->crd_klen / 8);
+ if (csp->csp_auth_alg != 0) {
+ if (crp->crp_auth_key != NULL) {
+ ubsec_setup_mackey(ses, csp->csp_auth_alg,
+ crp->crp_auth_key, csp->csp_auth_klen);
}
- macoffset = maccrd->crd_skip;
-
- if (maccrd->crd_alg == CRYPTO_MD5_HMAC)
+ if (csp->csp_auth_alg == CRYPTO_MD5_HMAC)
ctx.pc_flags |= htole16(UBS_PKTCTX_AUTH_MD5);
else
ctx.pc_flags |= htole16(UBS_PKTCTX_AUTH_SHA1);
@@ -1137,35 +1087,37 @@ ubsec_process(device_t dev, struct cryptop *crp, int hint)
}
}
- if (enccrd && maccrd) {
+ if (csp->csp_mode == CSP_MODE_ETA) {
/*
- * ubsec cannot handle packets where the end of encryption
- * and authentication are not the same, or where the
- * encrypted part begins before the authenticated part.
+ * ubsec only supports ETA requests where there is no
+ * gap between the AAD and payload.
*/
- if ((encoffset + enccrd->crd_len) !=
- (macoffset + maccrd->crd_len)) {
+ if (crp->crp_aad_length != 0 &&
+ crp->crp_aad_start + crp->crp_aad_length !=
+ crp->crp_payload_start) {
ubsecstats.hst_lenmismatch++;
err = EINVAL;
goto errout;
}
- if (enccrd->crd_skip < maccrd->crd_skip) {
- ubsecstats.hst_skipmismatch++;
- err = EINVAL;
- goto errout;
+
+ if (crp->crp_aad_length != 0) {
+ sskip = crp->crp_aad_start;
+ } else {
+ sskip = crp->crp_payload_start;
}
- sskip = maccrd->crd_skip;
- cpskip = dskip = enccrd->crd_skip;
- stheend = maccrd->crd_len;
- dtheend = enccrd->crd_len;
- coffset = enccrd->crd_skip - maccrd->crd_skip;
+ cpskip = dskip = crp->crp_payload_start;
+ stheend = crp->crp_aad_length + crp->crp_payload_length;
+ dtheend = crp->crp_payload_length;
+ coffset = crp->crp_aad_length;
cpoffset = cpskip + dtheend;
#ifdef UBSEC_DEBUG
if (ubsec_debug) {
- printf("mac: skip %d, len %d, inject %d\n",
- maccrd->crd_skip, maccrd->crd_len, maccrd->crd_inject);
- printf("enc: skip %d, len %d, inject %d\n",
- enccrd->crd_skip, enccrd->crd_len, enccrd->crd_inject);
+ printf("AAD: start %d, len %d, digest %d\n",
+ crp->crp_aad_start, crp->crp_aad_length,
+ crp->crp_digest_start);
+ printf("payload: start %d, len %d, IV %d\n",
+ crp->crp_payload_start, crp->crp_payload_length,
+ crp->crp_iv_start);
printf("src: skip %d, len %d\n", sskip, stheend);
printf("dst: skip %d, len %d\n", dskip, dtheend);
printf("ubs: coffset %d, pktlen %d, cpskip %d, cpoffset %d\n",
@@ -1173,8 +1125,8 @@ ubsec_process(device_t dev, struct cryptop *crp, int hint)
}
#endif
} else {
- cpskip = dskip = sskip = macoffset + encoffset;
- dtheend = stheend = (enccrd)?enccrd->crd_len:maccrd->crd_len;
+ cpskip = dskip = sskip = crp->crp_payload_start;
+ dtheend = stheend = crp->crp_payload_length;
cpoffset = cpskip + dtheend;
coffset = 0;
}
@@ -1185,25 +1137,15 @@ ubsec_process(device_t dev, struct cryptop *crp, int hint)
err = ENOMEM;
goto errout;
}
- if (crp->crp_flags & CRYPTO_F_IMBUF) {
- if (bus_dmamap_load_mbuf(sc->sc_dmat, q->q_src_map,
- q->q_src_m, ubsec_op_cb, &q->q_src, BUS_DMA_NOWAIT) != 0) {
- bus_dmamap_destroy(sc->sc_dmat, q->q_src_map);
- q->q_src_map = NULL;
- ubsecstats.hst_noload++;
- err = ENOMEM;
- goto errout;
- }
- } else if (crp->crp_flags & CRYPTO_F_IOV) {
- if (bus_dmamap_load_uio(sc->sc_dmat, q->q_src_map,
- q->q_src_io, ubsec_op_cb, &q->q_src, BUS_DMA_NOWAIT) != 0) {
- bus_dmamap_destroy(sc->sc_dmat, q->q_src_map);
- q->q_src_map = NULL;
- ubsecstats.hst_noload++;
- err = ENOMEM;
- goto errout;
- }
+ if (bus_dmamap_load_crp(sc->sc_dmat, q->q_src_map, crp, ubsec_op_cb,
+ &q->q_src, BUS_DMA_NOWAIT) != 0) {
+ bus_dmamap_destroy(sc->sc_dmat, q->q_src_map);
+ q->q_src_map = NULL;
+ ubsecstats.hst_noload++;
+ err = ENOMEM;
+ goto errout;
}
+ q->q_src_mapsize = ubsec_crp_length(crp);
nicealign = ubsec_dmamap_aligned(&q->q_src);
dmap->d_dma->d_mcr.mcr_pktlen = htole16(stheend);
@@ -1257,7 +1199,7 @@ ubsec_process(device_t dev, struct cryptop *crp, int hint)
j++;
}
- if (enccrd == NULL && maccrd != NULL) {
+ if (csp->csp_mode == CSP_MODE_DIGEST) {
dmap->d_dma->d_mcr.mcr_opktbuf.pb_addr = 0;
dmap->d_dma->d_mcr.mcr_opktbuf.pb_len = 0;
dmap->d_dma->d_mcr.mcr_opktbuf.pb_next = htole32(dmap->d_alloc.dma_paddr +
@@ -1270,104 +1212,79 @@ ubsec_process(device_t dev, struct cryptop *crp, int hint)
dmap->d_dma->d_mcr.mcr_opktbuf.pb_next);
#endif
} else {
- if (crp->crp_flags & CRYPTO_F_IOV) {
- if (!nicealign) {
- ubsecstats.hst_iovmisaligned++;
- err = EINVAL;
- goto errout;
+ if (nicealign) {
+ q->q_dst = q->q_src;
+ } else if (crp->crp_buf_type == CRYPTO_BUF_MBUF) {
+ int totlen, len;
+ struct mbuf *m, *top, **mp;
+
+ ubsecstats.hst_unaligned++;
+ totlen = q->q_src_mapsize;
+ if (totlen >= MINCLSIZE) {
+ m = m_getcl(M_NOWAIT, MT_DATA,
+ crp->crp_mbuf->m_flags & M_PKTHDR);
+ len = MCLBYTES;
+ } else if (crp->crp_mbuf->m_flags & M_PKTHDR) {
+ m = m_gethdr(M_NOWAIT, MT_DATA);
+ len = MHLEN;
+ } else {
+ m = m_get(M_NOWAIT, MT_DATA);
+ len = MLEN;
}
- if (bus_dmamap_create(sc->sc_dmat, BUS_DMA_NOWAIT,
- &q->q_dst_map)) {
- ubsecstats.hst_nomap++;
- err = ENOMEM;
- goto errout;
+ if (m && crp->crp_mbuf->m_flags & M_PKTHDR &&
+ !m_dup_pkthdr(m, crp->crp_mbuf, M_NOWAIT)) {
+ m_free(m);
+ m = NULL;
}
- if (bus_dmamap_load_uio(sc->sc_dmat, q->q_dst_map,
- q->q_dst_io, ubsec_op_cb, &q->q_dst, BUS_DMA_NOWAIT) != 0) {
- bus_dmamap_destroy(sc->sc_dmat, q->q_dst_map);
- q->q_dst_map = NULL;
- ubsecstats.hst_noload++;
- err = ENOMEM;
+ if (m == NULL) {
+ ubsecstats.hst_nombuf++;
+ err = sc->sc_nqueue ? ERESTART : ENOMEM;
goto errout;
}
- } else if (crp->crp_flags & CRYPTO_F_IMBUF) {
- if (nicealign) {
- q->q_dst = q->q_src;
- } else {
- int totlen, len;
- struct mbuf *m, *top, **mp;
+ m->m_len = len = min(totlen, len);
+ totlen -= len;
+ top = m;
+ mp = &top;
- ubsecstats.hst_unaligned++;
- totlen = q->q_src_mapsize;
+ while (totlen > 0) {
if (totlen >= MINCLSIZE) {
- m = m_getcl(M_NOWAIT, MT_DATA,
- q->q_src_m->m_flags & M_PKTHDR);
+ m = m_getcl(M_NOWAIT, MT_DATA, 0);
len = MCLBYTES;
- } else if (q->q_src_m->m_flags & M_PKTHDR) {
- m = m_gethdr(M_NOWAIT, MT_DATA);
- len = MHLEN;
} else {
m = m_get(M_NOWAIT, MT_DATA);
len = MLEN;
}
- if (m && q->q_src_m->m_flags & M_PKTHDR &&
- !m_dup_pkthdr(m, q->q_src_m, M_NOWAIT)) {
- m_free(m);
- m = NULL;
- }
if (m == NULL) {
+ m_freem(top);
ubsecstats.hst_nombuf++;
err = sc->sc_nqueue ? ERESTART : ENOMEM;
goto errout;
}
m->m_len = len = min(totlen, len);
totlen -= len;
- top = m;
- mp = &top;
-
- while (totlen > 0) {
- if (totlen >= MINCLSIZE) {
- m = m_getcl(M_NOWAIT,
- MT_DATA, 0);
- len = MCLBYTES;
- } else {
- m = m_get(M_NOWAIT, MT_DATA);
- len = MLEN;
- }
- if (m == NULL) {
- m_freem(top);
- ubsecstats.hst_nombuf++;
- err = sc->sc_nqueue ? ERESTART : ENOMEM;
- goto errout;
- }
- m->m_len = len = min(totlen, len);
- totlen -= len;
- *mp = m;
- mp = &m->m_next;
- }
- q->q_dst_m = top;
- ubsec_mcopy(q->q_src_m, q->q_dst_m,
- cpskip, cpoffset);
- if (bus_dmamap_create(sc->sc_dmat,
- BUS_DMA_NOWAIT, &q->q_dst_map) != 0) {
- ubsecstats.hst_nomap++;
- err = ENOMEM;
- goto errout;
- }
- if (bus_dmamap_load_mbuf(sc->sc_dmat,
- q->q_dst_map, q->q_dst_m,
- ubsec_op_cb, &q->q_dst,
- BUS_DMA_NOWAIT) != 0) {
- bus_dmamap_destroy(sc->sc_dmat,
- q->q_dst_map);
- q->q_dst_map = NULL;
- ubsecstats.hst_noload++;
- err = ENOMEM;
- goto errout;
- }
+ *mp = m;
+ mp = &m->m_next;
}
+ q->q_dst_m = top;
+ ubsec_mcopy(crp->crp_mbuf, q->q_dst_m, cpskip, cpoffset);
+ if (bus_dmamap_create(sc->sc_dmat, BUS_DMA_NOWAIT,
+ &q->q_dst_map) != 0) {
+ ubsecstats.hst_nomap++;
+ err = ENOMEM;
+ goto errout;
+ }
+ if (bus_dmamap_load_mbuf_sg(sc->sc_dmat,
+ q->q_dst_map, q->q_dst_m, q->q_dst_segs,
+ &q->q_dst_nsegs, 0) != 0) {
+ bus_dmamap_destroy(sc->sc_dmat, q->q_dst_map);
+ q->q_dst_map = NULL;
+ ubsecstats.hst_noload++;
+ err = ENOMEM;
+ goto errout;
+ }
+ q->q_dst_mapsize = q->q_src_mapsize;
} else {
- ubsecstats.hst_badflags++;
+ ubsecstats.hst_iovmisaligned++;
err = EINVAL;
goto errout;
}
@@ -1414,7 +1331,7 @@ ubsec_process(device_t dev, struct cryptop *crp, int hint)
pb->pb_len = htole32(packl);
if ((i + 1) == q->q_dst_nsegs) {
- if (maccrd)
+ if (csp->csp_auth_alg != 0)
pb->pb_next = htole32(dmap->d_alloc.dma_paddr +
offsetof(struct ubsec_dmachunk, d_macbuf[0]));
else
@@ -1465,7 +1382,7 @@ ubsec_process(device_t dev, struct cryptop *crp, int hint)
errout:
if (q != NULL) {
- if ((q->q_dst_m != NULL) && (q->q_src_m != q->q_dst_m))
+ if (q->q_dst_m != NULL)
m_freem(q->q_dst_m);
if (q->q_dst_map != NULL && q->q_dst_map != q->q_src_map) {
@@ -1495,12 +1412,14 @@ errout:
static void
ubsec_callback(struct ubsec_softc *sc, struct ubsec_q *q)
{
+ const struct crypto_session_params *csp;
struct cryptop *crp = (struct cryptop *)q->q_crp;
struct ubsec_session *ses;
- struct cryptodesc *crd;
struct ubsec_dma *dmap = q->q_dma;
+ char hash[SHA1_HASH_LEN];
ses = crypto_get_driver_session(crp->crp_session);
+ csp = crypto_get_params(crp->crp_session);
ubsecstats.hst_opackets++;
ubsecstats.hst_obytes += dmap->d_alloc.dma_size;
@@ -1517,31 +1436,21 @@ ubsec_callback(struct ubsec_softc *sc, struct ubsec_q *q)
bus_dmamap_unload(sc->sc_dmat, q->q_src_map);
bus_dmamap_destroy(sc->sc_dmat, q->q_src_map);
- if ((crp->crp_flags & CRYPTO_F_IMBUF) && (q->q_src_m != q->q_dst_m)) {
- m_freem(q->q_src_m);
- crp->crp_buf = (caddr_t)q->q_dst_m;
+ if (q->q_dst_m != NULL) {
+ m_freem(crp->crp_mbuf);
+ crp->crp_mbuf = q->q_dst_m;
}
- /* copy out IV for future use */
- if (q->q_flags & UBSEC_QFLAGS_COPYOUTIV) {
- for (crd = crp->crp_desc; crd; crd = crd->crd_next) {
- if (crd->crd_alg != CRYPTO_DES_CBC &&
- crd->crd_alg != CRYPTO_3DES_CBC)
- continue;
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- crd->crd_skip + crd->crd_len - 8, 8,
- (caddr_t)ses->ses_iv);
- break;
- }
- }
-
- for (crd = crp->crp_desc; crd; crd = crd->crd_next) {
- if (crd->crd_alg != CRYPTO_MD5_HMAC &&
- crd->crd_alg != CRYPTO_SHA1_HMAC)
- continue;
- crypto_copyback(crp->crp_flags, crp->crp_buf, crd->crd_inject,
- ses->ses_mlen, (caddr_t)dmap->d_dma->d_macbuf);
- break;
+ if (csp->csp_auth_alg != 0) {
+ if (crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) {
+ crypto_copydata(crp, crp->crp_digest_start,
+ ses->ses_mlen, hash);
+ if (timingsafe_bcmp(dmap->d_dma->d_macbuf, hash,
+ ses->ses_mlen) != 0)
+ crp->crp_etype = EBADMSG;
+ } else
+ crypto_copyback(crp, crp->crp_digest_start,
+ ses->ses_mlen, dmap->d_dma->d_macbuf);
}
mtx_lock(&sc->sc_freeqlock);
SIMPLEQ_INSERT_TAIL(&sc->sc_freequeue, q, q_next);
@@ -1942,7 +1851,7 @@ ubsec_free_q(struct ubsec_softc *sc, struct ubsec_q *q)
if(q->q_stacked_mcr[i]) {
q2 = q->q_stacked_mcr[i];
- if ((q2->q_dst_m != NULL) && (q2->q_src_m != q2->q_dst_m))
+ if (q2->q_dst_m != NULL)
m_freem(q2->q_dst_m);
crp = (struct cryptop *)q2->q_crp;
@@ -1959,7 +1868,7 @@ ubsec_free_q(struct ubsec_softc *sc, struct ubsec_q *q)
/*
* Free header MCR
*/
- if ((q->q_dst_m != NULL) && (q->q_src_m != q->q_dst_m))
+ if (q->q_dst_m != NULL)
m_freem(q->q_dst_m);
crp = (struct cryptop *)q->q_crp;
diff --git a/sys/dev/ubsec/ubsecvar.h b/sys/dev/ubsec/ubsecvar.h
index ae6d5e2cb6bc..b857061be5c0 100644
--- a/sys/dev/ubsec/ubsecvar.h
+++ b/sys/dev/ubsec/ubsecvar.h
@@ -134,10 +134,6 @@ struct ubsec_dma {
#define UBS_FLAGS_RNG 0x10 /* hardware rng */
struct ubsec_operand {
- union {
- struct mbuf *m;
- struct uio *io;
- } u;
bus_dmamap_t map;
bus_size_t mapsize;
int nsegs;
@@ -153,19 +149,16 @@ struct ubsec_q {
struct ubsec_operand q_src;
struct ubsec_operand q_dst;
+ struct mbuf *q_dst_m;
int q_flags;
};
-#define q_src_m q_src.u.m
-#define q_src_io q_src.u.io
#define q_src_map q_src.map
#define q_src_nsegs q_src.nsegs
#define q_src_segs q_src.segs
#define q_src_mapsize q_src.mapsize
-#define q_dst_m q_dst.u.m
-#define q_dst_io q_dst.u.io
#define q_dst_map q_dst.map
#define q_dst_nsegs q_dst.nsegs
#define q_dst_segs q_dst.segs
@@ -215,7 +208,6 @@ struct ubsec_session {
u_int32_t ses_mlen; /* hmac length */
u_int32_t ses_hminner[5]; /* hmac inner state */
u_int32_t ses_hmouter[5]; /* hmac outer state */
- u_int32_t ses_iv[2]; /* [3]DES iv */
};
#endif /* _KERNEL */
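The ubsecvar.h hunk above can drop the per-operand mbuf/uio union because session configuration now lives in one flat parameter record rather than a linked list of descriptors. A userspace mock of that shape (field names follow the patch, but the struct and enum definitions here are stand-ins, not the kernel's headers):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-ins for the session modes named in the commit message. */
enum csp_mode { CSP_MODE_COMPRESS, CSP_MODE_CIPHER, CSP_MODE_DIGEST,
    CSP_MODE_AEAD, CSP_MODE_ETA };

/* Mock of struct crypto_session_params: one flat record instead of a
 * chain of cryptoini descriptors walked per algorithm. */
struct csp_mock {
	enum csp_mode csp_mode;
	int csp_cipher_alg;		/* also holds compression algorithms */
	const void *csp_cipher_key;
	size_t csp_cipher_klen;		/* bytes, not bits, in the new API */
	int csp_auth_alg;
	const void *csp_auth_key;
	size_t csp_auth_klen;
	size_t csp_ivlen;
};

/* Mirrors the g_eli_newsession() pattern: start as CIPHER, upgrade to
 * ETA when an auth algorithm is also configured. */
static void
mock_setup_eta(struct csp_mock *csp, int ealg, size_t eklen_bits,
    int aalg, size_t aklen)
{
	memset(csp, 0, sizeof(*csp));
	csp->csp_mode = CSP_MODE_CIPHER;
	csp->csp_cipher_alg = ealg;
	csp->csp_cipher_klen = eklen_bits / 8;	/* new API takes bytes */
	if (aalg != 0) {
		csp->csp_mode = CSP_MODE_ETA;
		csp->csp_auth_alg = aalg;
		csp->csp_auth_klen = aklen;
	}
}
```

Consumers fill this once and hand it to crypto_newsession(); no code has to switch on algorithm IDs to decide which key is which.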
diff --git a/sys/geom/eli/g_eli.c b/sys/geom/eli/g_eli.c
index d14d1e7fe750..c585a665e9f5 100644
--- a/sys/geom/eli/g_eli.c
+++ b/sys/geom/eli/g_eli.c
@@ -488,41 +488,44 @@ static int
g_eli_newsession(struct g_eli_worker *wr)
{
struct g_eli_softc *sc;
- struct cryptoini crie, cria;
+ struct crypto_session_params csp;
int error;
+ void *key;
sc = wr->w_softc;
- bzero(&crie, sizeof(crie));
- crie.cri_alg = sc->sc_ealgo;
- crie.cri_klen = sc->sc_ekeylen;
+ memset(&csp, 0, sizeof(csp));
+ csp.csp_mode = CSP_MODE_CIPHER;
+ csp.csp_cipher_alg = sc->sc_ealgo;
+ csp.csp_ivlen = g_eli_ivlen(sc->sc_ealgo);
+ csp.csp_cipher_klen = sc->sc_ekeylen / 8;
if (sc->sc_ealgo == CRYPTO_AES_XTS)
- crie.cri_klen <<= 1;
+ csp.csp_cipher_klen <<= 1;
if ((sc->sc_flags & G_ELI_FLAG_FIRST_KEY) != 0) {
- crie.cri_key = g_eli_key_hold(sc, 0,
+ key = g_eli_key_hold(sc, 0,
LIST_FIRST(&sc->sc_geom->consumer)->provider->sectorsize);
+ csp.csp_cipher_key = key;
} else {
- crie.cri_key = sc->sc_ekey;
+ key = NULL;
+ csp.csp_cipher_key = sc->sc_ekey;
}
if (sc->sc_flags & G_ELI_FLAG_AUTH) {
- bzero(&cria, sizeof(cria));
- cria.cri_alg = sc->sc_aalgo;
- cria.cri_klen = sc->sc_akeylen;
- cria.cri_key = sc->sc_akey;
- crie.cri_next = &cria;
+ csp.csp_mode = CSP_MODE_ETA;
+ csp.csp_auth_alg = sc->sc_aalgo;
+ csp.csp_auth_klen = G_ELI_AUTH_SECKEYLEN;
}
switch (sc->sc_crypto) {
case G_ELI_CRYPTO_SW:
- error = crypto_newsession(&wr->w_sid, &crie,
+ error = crypto_newsession(&wr->w_sid, &csp,
CRYPTOCAP_F_SOFTWARE);
break;
case G_ELI_CRYPTO_HW:
- error = crypto_newsession(&wr->w_sid, &crie,
+ error = crypto_newsession(&wr->w_sid, &csp,
CRYPTOCAP_F_HARDWARE);
break;
case G_ELI_CRYPTO_UNKNOWN:
- error = crypto_newsession(&wr->w_sid, &crie,
+ error = crypto_newsession(&wr->w_sid, &csp,
CRYPTOCAP_F_HARDWARE);
if (error == 0) {
mtx_lock(&sc->sc_queue_mtx);
@@ -530,7 +533,7 @@ g_eli_newsession(struct g_eli_worker *wr)
sc->sc_crypto = G_ELI_CRYPTO_HW;
mtx_unlock(&sc->sc_queue_mtx);
} else {
- error = crypto_newsession(&wr->w_sid, &crie,
+ error = crypto_newsession(&wr->w_sid, &csp,
CRYPTOCAP_F_SOFTWARE);
mtx_lock(&sc->sc_queue_mtx);
if (sc->sc_crypto == G_ELI_CRYPTO_UNKNOWN)
@@ -542,8 +545,12 @@ g_eli_newsession(struct g_eli_worker *wr)
panic("%s: invalid condition", __func__);
}
- if ((sc->sc_flags & G_ELI_FLAG_FIRST_KEY) != 0)
- g_eli_key_drop(sc, crie.cri_key);
+ if ((sc->sc_flags & G_ELI_FLAG_FIRST_KEY) != 0) {
+ if (error)
+ g_eli_key_drop(sc, key);
+ else
+ wr->w_first_key = key;
+ }
return (error);
}
@@ -551,8 +558,14 @@ g_eli_newsession(struct g_eli_worker *wr)
static void
g_eli_freesession(struct g_eli_worker *wr)
{
+ struct g_eli_softc *sc;
crypto_freesession(wr->w_sid);
+ if (wr->w_first_key != NULL) {
+ sc = wr->w_softc;
+ g_eli_key_drop(sc, wr->w_first_key);
+ wr->w_first_key = NULL;
+ }
}
static void
diff --git a/sys/geom/eli/g_eli.h b/sys/geom/eli/g_eli.h
index e387782d949b..dab9d13ccff7 100644
--- a/sys/geom/eli/g_eli.h
+++ b/sys/geom/eli/g_eli.h
@@ -163,6 +163,7 @@ extern u_int g_eli_batch;
struct g_eli_worker {
struct g_eli_softc *w_softc;
struct proc *w_proc;
+ void *w_first_key;
u_int w_number;
crypto_session_t w_sid;
boolean_t w_active;
@@ -574,6 +575,25 @@ g_eli_keylen(u_int algo, u_int keylen)
}
static __inline u_int
+g_eli_ivlen(u_int algo)
+{
+
+ switch (algo) {
+ case CRYPTO_AES_XTS:
+ return (AES_XTS_IV_LEN);
+ case CRYPTO_AES_CBC:
+ return (AES_BLOCK_LEN);
+ case CRYPTO_BLF_CBC:
+ return (BLOWFISH_BLOCK_LEN);
+ case CRYPTO_CAMELLIA_CBC:
+ return (CAMELLIA_BLOCK_LEN);
+ case CRYPTO_3DES_CBC:
+ return (DES3_BLOCK_LEN);
+ }
+ return (0);
+}
+
+static __inline u_int
g_eli_hashlen(u_int algo)
{
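The g_eli_ivlen() helper added above maps each cipher to its IV length so geli can populate csp_ivlen explicitly. The same mapping logic, reproduced as a runnable sketch (the algorithm IDs and length macros below are mock values, not the kernel constants):

```c
#include <assert.h>

/* Stand-in algorithm IDs and block lengths; actual kernel values differ,
 * but the switch mirrors the g_eli_ivlen() added by the patch. */
enum { ALG_AES_XTS = 1, ALG_AES_CBC, ALG_BLF_CBC, ALG_CAMELLIA_CBC,
    ALG_3DES_CBC };
#define MOCK_AES_XTS_IV_LEN	8
#define MOCK_AES_BLOCK_LEN	16
#define MOCK_BF_BLOCK_LEN	8
#define MOCK_CML_BLOCK_LEN	16
#define MOCK_DES3_BLOCK_LEN	8

static unsigned
mock_ivlen(unsigned algo)
{

	switch (algo) {
	case ALG_AES_XTS:
		return (MOCK_AES_XTS_IV_LEN);	/* XTS uses a sector tweak */
	case ALG_AES_CBC:
		return (MOCK_AES_BLOCK_LEN);	/* CBC IV = cipher block */
	case ALG_BLF_CBC:
		return (MOCK_BF_BLOCK_LEN);
	case ALG_CAMELLIA_CBC:
		return (MOCK_CML_BLOCK_LEN);
	case ALG_3DES_CBC:
		return (MOCK_DES3_BLOCK_LEN);
	}
	return (0);		/* unknown algorithm: no IV */
}
```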
diff --git a/sys/geom/eli/g_eli_crypto.c b/sys/geom/eli/g_eli_crypto.c
index 41918f8898ca..ae88dadcaff7 100644
--- a/sys/geom/eli/g_eli_crypto.c
+++ b/sys/geom/eli/g_eli_crypto.c
@@ -61,50 +61,40 @@ static int
g_eli_crypto_cipher(u_int algo, int enc, u_char *data, size_t datasize,
const u_char *key, size_t keysize)
{
- struct cryptoini cri;
+ struct crypto_session_params csp;
struct cryptop *crp;
- struct cryptodesc *crd;
crypto_session_t sid;
- u_char *p;
int error;
KASSERT(algo != CRYPTO_AES_XTS,
("%s: CRYPTO_AES_XTS unexpected here", __func__));
- bzero(&cri, sizeof(cri));
- cri.cri_alg = algo;
- cri.cri_key = __DECONST(void *, key);
- cri.cri_klen = keysize;
- error = crypto_newsession(&sid, &cri, CRYPTOCAP_F_SOFTWARE);
+ memset(&csp, 0, sizeof(csp));
+ csp.csp_mode = CSP_MODE_CIPHER;
+ csp.csp_cipher_alg = algo;
+ csp.csp_ivlen = g_eli_ivlen(algo);
+ csp.csp_cipher_key = key;
+ csp.csp_cipher_klen = keysize / 8;
+ error = crypto_newsession(&sid, &csp, CRYPTOCAP_F_SOFTWARE);
if (error != 0)
return (error);
- p = malloc(sizeof(*crp) + sizeof(*crd), M_ELI, M_NOWAIT | M_ZERO);
- if (p == NULL) {
+ crp = crypto_getreq(sid, M_NOWAIT);
+ if (crp == NULL) {
crypto_freesession(sid);
return (ENOMEM);
}
- crp = (struct cryptop *)p; p += sizeof(*crp);
- crd = (struct cryptodesc *)p; p += sizeof(*crd);
-
- crd->crd_skip = 0;
- crd->crd_len = datasize;
- crd->crd_flags = CRD_F_IV_EXPLICIT | CRD_F_IV_PRESENT;
- if (enc)
- crd->crd_flags |= CRD_F_ENCRYPT;
- crd->crd_alg = algo;
- crd->crd_key = __DECONST(void *, key);
- crd->crd_klen = keysize;
- bzero(crd->crd_iv, sizeof(crd->crd_iv));
- crd->crd_next = NULL;
-
- crp->crp_session = sid;
- crp->crp_ilen = datasize;
- crp->crp_olen = datasize;
+
+ crp->crp_payload_start = 0;
+ crp->crp_payload_length = datasize;
+ crp->crp_flags = CRYPTO_F_CBIFSYNC | CRYPTO_F_IV_SEPARATE;
+ crp->crp_op = enc ? CRYPTO_OP_ENCRYPT : CRYPTO_OP_DECRYPT;
+ memset(crp->crp_iv, 0, sizeof(crp->crp_iv));
+
crp->crp_opaque = NULL;
crp->crp_callback = g_eli_crypto_done;
+ crp->crp_buf_type = CRYPTO_BUF_CONTIG;
+ crp->crp_ilen = datasize;
crp->crp_buf = (void *)data;
- crp->crp_flags = CRYPTO_F_CBIFSYNC;
- crp->crp_desc = crd;
error = crypto_dispatch(crp);
if (error == 0) {
@@ -113,7 +103,7 @@ g_eli_crypto_cipher(u_int algo, int enc, u_char *data, size_t datasize,
error = crp->crp_etype;
}
- free(crp, M_ELI);
+ crypto_freereq(crp);
crypto_freesession(sid);
return (error);
}
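The g_eli_crypto_cipher() hunk shows the new request lifecycle: crypto_getreq() now takes the session and allocates the cryptop itself, and crypto_freereq() releases it, replacing the hand-rolled malloc of cryptop + cryptodesc. A minimal synchronous mock of that flow (all names and behavior here are stand-ins; the real dispatch completes via a callback):

```c
#include <assert.h>
#include <stdlib.h>

/* Mock request: a few of the fields the patch's consumers now set. */
struct mock_crp {
	int session;
	size_t payload_start;
	size_t payload_length;
	int op;			/* 1 = encrypt, 2 = decrypt */
	int etype;		/* completion status, like crp_etype */
};

/* crypto_getreq(sid, how): request comes pre-bound to its session. */
static struct mock_crp *
mock_getreq(int session)
{
	struct mock_crp *crp = calloc(1, sizeof(*crp));

	if (crp != NULL)
		crp->session = session;
	return (crp);
}

/* A synchronous stand-in for crypto_dispatch(): validates and "runs". */
static int
mock_dispatch(struct mock_crp *crp)
{

	crp->etype = (crp->payload_length == 0) ? 22 /* EINVAL */ : 0;
	return (0);
}

/* crypto_freereq() counterpart. */
static void
mock_freereq(struct mock_crp *crp)
{

	free(crp);
}
```

The consumer no longer builds or frees descriptor chains; it sets payload offsets and an op on the request and hands it back to the framework.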
diff --git a/sys/geom/eli/g_eli_integrity.c b/sys/geom/eli/g_eli_integrity.c
index a58e9f4358f6..7ec1b5662e6e 100644
--- a/sys/geom/eli/g_eli_integrity.c
+++ b/sys/geom/eli/g_eli_integrity.c
@@ -140,31 +140,51 @@ g_eli_auth_read_done(struct cryptop *crp)
}
bp = (struct bio *)crp->crp_opaque;
bp->bio_inbed++;
+ sc = bp->bio_to->geom->softc;
if (crp->crp_etype == 0) {
- bp->bio_completed += crp->crp_olen;
- G_ELI_DEBUG(3, "Crypto READ request done (%d/%d) (add=%jd completed=%jd).",
- bp->bio_inbed, bp->bio_children, (intmax_t)crp->crp_olen, (intmax_t)bp->bio_completed);
+ bp->bio_completed += crp->crp_payload_length;
+ G_ELI_DEBUG(3, "Crypto READ request done (%d/%d) (add=%d completed=%jd).",
+ bp->bio_inbed, bp->bio_children, crp->crp_payload_length, (intmax_t)bp->bio_completed);
} else {
- G_ELI_DEBUG(1, "Crypto READ request failed (%d/%d) error=%d.",
+ u_int nsec, decr_secsize, encr_secsize, rel_sec;
+ int *errorp;
+
+ /* Sectorsize of decrypted provider eg. 4096. */
+ decr_secsize = bp->bio_to->sectorsize;
+ /* The real sectorsize of encrypted provider, eg. 512. */
+ encr_secsize =
+ LIST_FIRST(&sc->sc_geom->consumer)->provider->sectorsize;
+ /* Number of sectors from decrypted provider, eg. 2. */
+ nsec = bp->bio_length / decr_secsize;
+ /* Number of sectors from encrypted provider, eg. 18. */
+ nsec = (nsec * sc->sc_bytes_per_sector) / encr_secsize;
+ /* Which relative sector this request decrypted. */
+ rel_sec = ((crp->crp_buf + crp->crp_payload_start) -
+ (char *)bp->bio_driver2) / encr_secsize;
+
+ errorp = (int *)((char *)bp->bio_driver2 + encr_secsize * nsec +
+ sizeof(int) * rel_sec);
+ *errorp = crp->crp_etype;
+ G_ELI_DEBUG(1,
+ "Crypto READ request failed (%d/%d) error=%d.",
bp->bio_inbed, bp->bio_children, crp->crp_etype);
- if (bp->bio_error == 0)
- bp->bio_error = crp->crp_etype;
+ if (bp->bio_error == 0 || bp->bio_error == EINTEGRITY)
+ bp->bio_error = crp->crp_etype == EBADMSG ?
+ EINTEGRITY : crp->crp_etype;
}
- sc = bp->bio_to->geom->softc;
- g_eli_key_drop(sc, crp->crp_desc->crd_next->crd_key);
+ if (crp->crp_cipher_key != NULL)
+ g_eli_key_drop(sc, __DECONST(void *, crp->crp_cipher_key));
+ crypto_freereq(crp);
/*
* Do we have all sectors already?
*/
if (bp->bio_inbed < bp->bio_children)
return (0);
+
if (bp->bio_error == 0) {
u_int i, lsec, nsec, data_secsize, decr_secsize, encr_secsize;
- u_char *srcdata, *dstdata, *auth;
- off_t coroff, corsize;
+ u_char *srcdata, *dstdata;
- /*
- * Verify data integrity based on calculated and read HMACs.
- */
/* Sectorsize of decrypted provider eg. 4096. */
decr_secsize = bp->bio_to->sectorsize;
/* The real sectorsize of encrypted provider, eg. 512. */
@@ -180,30 +200,54 @@ g_eli_auth_read_done(struct cryptop *crp)
srcdata = bp->bio_driver2;
dstdata = bp->bio_data;
- auth = srcdata + encr_secsize * nsec;
+
+ for (i = 1; i <= nsec; i++) {
+ data_secsize = sc->sc_data_per_sector;
+ if ((i % lsec) == 0)
+ data_secsize = decr_secsize % data_secsize;
+ bcopy(srcdata + sc->sc_alen, dstdata, data_secsize);
+ srcdata += encr_secsize;
+ dstdata += data_secsize;
+ }
+ } else if (bp->bio_error == EINTEGRITY) {
+ u_int i, lsec, nsec, data_secsize, decr_secsize, encr_secsize;
+ int *errorp;
+ off_t coroff, corsize, dstoff;
+
+ /* Sectorsize of decrypted provider eg. 4096. */
+ decr_secsize = bp->bio_to->sectorsize;
+ /* The real sectorsize of encrypted provider, eg. 512. */
+ encr_secsize = LIST_FIRST(&sc->sc_geom->consumer)->provider->sectorsize;
+ /* Number of data bytes in one encrypted sector, eg. 480. */
+ data_secsize = sc->sc_data_per_sector;
+ /* Number of sectors from decrypted provider, eg. 2. */
+ nsec = bp->bio_length / decr_secsize;
+ /* Number of sectors from encrypted provider, eg. 18. */
+ nsec = (nsec * sc->sc_bytes_per_sector) / encr_secsize;
+ /* Last sector number in every big sector, eg. 9. */
+ lsec = sc->sc_bytes_per_sector / encr_secsize;
+
+ errorp = (int *)((char *)bp->bio_driver2 + encr_secsize * nsec);
coroff = -1;
corsize = 0;
+ dstoff = bp->bio_offset;
for (i = 1; i <= nsec; i++) {
data_secsize = sc->sc_data_per_sector;
if ((i % lsec) == 0)
data_secsize = decr_secsize % data_secsize;
- if (bcmp(srcdata, auth, sc->sc_alen) != 0) {
+ if (errorp[i - 1] == EBADMSG) {
/*
- * Curruption detected, remember the offset if
+ * Corruption detected, remember the offset if
* this is the first corrupted sector and
* increase size.
*/
- if (bp->bio_error == 0)
- bp->bio_error = -1;
- if (coroff == -1) {
- coroff = bp->bio_offset +
- (dstdata - (u_char *)bp->bio_data);
- }
+ if (coroff == -1)
+ coroff = dstoff;
corsize += data_secsize;
} else {
/*
- * No curruption, good.
+ * No corruption, good.
* Report previous corruption if there was one.
*/
if (coroff != -1) {
@@ -214,12 +258,8 @@ g_eli_auth_read_done(struct cryptop *crp)
coroff = -1;
corsize = 0;
}
- bcopy(srcdata + sc->sc_alen, dstdata,
- data_secsize);
}
- srcdata += encr_secsize;
- dstdata += data_secsize;
- auth += sc->sc_alen;
+ dstoff += data_secsize;
}
/* Report previous corruption if there was one. */
if (coroff != -1) {
@@ -231,9 +271,7 @@ g_eli_auth_read_done(struct cryptop *crp)
free(bp->bio_driver2, M_ELI);
bp->bio_driver2 = NULL;
if (bp->bio_error != 0) {
- if (bp->bio_error == -1)
- bp->bio_error = EINTEGRITY;
- else {
+ if (bp->bio_error != EINTEGRITY) {
G_ELI_LOGREQ(0, bp,
"Crypto READ request failed (error=%d).",
bp->bio_error);
@@ -277,7 +315,9 @@ g_eli_auth_write_done(struct cryptop *crp)
bp->bio_error = crp->crp_etype;
}
sc = bp->bio_to->geom->softc;
- g_eli_key_drop(sc, crp->crp_desc->crd_key);
+ if (crp->crp_cipher_key != NULL)
+ g_eli_key_drop(sc, __DECONST(void *, crp->crp_cipher_key));
+ crypto_freereq(crp);
/*
* All sectors are already encrypted?
*/
@@ -361,14 +401,16 @@ g_eli_auth_read(struct g_eli_softc *sc, struct bio *bp)
cbp->bio_length = cp->provider->sectorsize * nsec;
size = cbp->bio_length;
- size += sc->sc_alen * nsec;
- size += sizeof(struct cryptop) * nsec;
- size += sizeof(struct cryptodesc) * nsec * 2;
+ size += sizeof(int) * nsec;
size += G_ELI_AUTH_SECKEYLEN * nsec;
cbp->bio_offset = (bp->bio_offset / bp->bio_to->sectorsize) * sc->sc_bytes_per_sector;
bp->bio_driver2 = malloc(size, M_ELI, M_WAITOK);
cbp->bio_data = bp->bio_driver2;
+ /* Clear the error array. */
+ memset((char *)bp->bio_driver2 + cbp->bio_length, 0,
+ sizeof(int) * nsec);
+
/*
* We read more than what is requested, so we have to be ready to read
* more than MAXPHYS.
@@ -408,10 +450,9 @@ g_eli_auth_run(struct g_eli_worker *wr, struct bio *bp)
{
struct g_eli_softc *sc;
struct cryptop *crp;
- struct cryptodesc *crde, *crda;
u_int i, lsec, nsec, data_secsize, decr_secsize, encr_secsize;
off_t dstoff;
- u_char *p, *data, *auth, *authkey, *plaindata;
+ u_char *p, *data, *authkey, *plaindata;
int error;
G_ELI_LOGREQ(3, bp, "%s", __func__);
@@ -433,19 +474,15 @@ g_eli_auth_run(struct g_eli_worker *wr, struct bio *bp)
/* Destination offset, used for IV generation. */
dstoff = (bp->bio_offset / bp->bio_to->sectorsize) * sc->sc_bytes_per_sector;
- auth = NULL; /* Silence compiler warning. */
plaindata = bp->bio_data;
if (bp->bio_cmd == BIO_READ) {
data = bp->bio_driver2;
- auth = data + encr_secsize * nsec;
- p = auth + sc->sc_alen * nsec;
+ p = data + encr_secsize * nsec;
+ p += sizeof(int) * nsec;
} else {
size_t size;
size = encr_secsize * nsec;
- size += sizeof(*crp) * nsec;
- size += sizeof(*crde) * nsec;
- size += sizeof(*crda) * nsec;
size += G_ELI_AUTH_SECKEYLEN * nsec;
size += sizeof(uintptr_t); /* Space for alignment. */
data = malloc(size, M_ELI, M_WAITOK);
@@ -460,9 +497,7 @@ g_eli_auth_run(struct g_eli_worker *wr, struct bio *bp)
#endif
for (i = 1; i <= nsec; i++, dstoff += encr_secsize) {
- crp = (struct cryptop *)p; p += sizeof(*crp);
- crde = (struct cryptodesc *)p; p += sizeof(*crde);
- crda = (struct cryptodesc *)p; p += sizeof(*crda);
+ crp = crypto_getreq(wr->w_sid, M_WAITOK);
authkey = (u_char *)p; p += G_ELI_AUTH_SECKEYLEN;
data_secsize = sc->sc_data_per_sector;
@@ -477,21 +512,14 @@ g_eli_auth_run(struct g_eli_worker *wr, struct bio *bp)
encr_secsize - sc->sc_alen - data_secsize);
}
- if (bp->bio_cmd == BIO_READ) {
- /* Remember read HMAC. */
- bcopy(data, auth, sc->sc_alen);
- auth += sc->sc_alen;
- /* TODO: bzero(9) can be commented out later. */
- bzero(data, sc->sc_alen);
- } else {
+ if (bp->bio_cmd == BIO_WRITE) {
bcopy(plaindata, data + sc->sc_alen, data_secsize);
plaindata += data_secsize;
}
- crp->crp_session = wr->w_sid;
crp->crp_ilen = sc->sc_alen + data_secsize;
- crp->crp_olen = data_secsize;
crp->crp_opaque = (void *)bp;
+ crp->crp_buf_type = CRYPTO_BUF_CONTIG;
crp->crp_buf = (void *)data;
data += encr_secsize;
crp->crp_flags = CRYPTO_F_CBIFSYNC;
@@ -499,41 +527,28 @@ g_eli_auth_run(struct g_eli_worker *wr, struct bio *bp)
crp->crp_flags |= CRYPTO_F_BATCH;
if (bp->bio_cmd == BIO_WRITE) {
crp->crp_callback = g_eli_auth_write_done;
- crp->crp_desc = crde;
- crde->crd_next = crda;
- crda->crd_next = NULL;
+ crp->crp_op = CRYPTO_OP_ENCRYPT |
+ CRYPTO_OP_COMPUTE_DIGEST;
} else {
crp->crp_callback = g_eli_auth_read_done;
- crp->crp_desc = crda;
- crda->crd_next = crde;
- crde->crd_next = NULL;
+ crp->crp_op = CRYPTO_OP_DECRYPT |
+ CRYPTO_OP_VERIFY_DIGEST;
+ }
+
+ crp->crp_digest_start = 0;
+ crp->crp_payload_start = sc->sc_alen;
+ crp->crp_payload_length = data_secsize;
+ crp->crp_flags |= CRYPTO_F_IV_SEPARATE;
+ if ((sc->sc_flags & G_ELI_FLAG_FIRST_KEY) == 0) {
+ crp->crp_cipher_key = g_eli_key_hold(sc, dstoff,
+ encr_secsize);
}
+ g_eli_crypto_ivgen(sc, dstoff, crp->crp_iv,
+ sizeof(crp->crp_iv));
- crde->crd_skip = sc->sc_alen;
- crde->crd_len = data_secsize;
- crde->crd_flags = CRD_F_IV_EXPLICIT | CRD_F_IV_PRESENT;
- if ((sc->sc_flags & G_ELI_FLAG_FIRST_KEY) == 0)
- crde->crd_flags |= CRD_F_KEY_EXPLICIT;
- if (bp->bio_cmd == BIO_WRITE)
- crde->crd_flags |= CRD_F_ENCRYPT;
- crde->crd_alg = sc->sc_ealgo;
- crde->crd_key = g_eli_key_hold(sc, dstoff, encr_secsize);
- crde->crd_klen = sc->sc_ekeylen;
- if (sc->sc_ealgo == CRYPTO_AES_XTS)
- crde->crd_klen <<= 1;
- g_eli_crypto_ivgen(sc, dstoff, crde->crd_iv,
- sizeof(crde->crd_iv));
-
- crda->crd_skip = sc->sc_alen;
- crda->crd_len = data_secsize;
- crda->crd_inject = 0;
- crda->crd_flags = CRD_F_KEY_EXPLICIT;
- crda->crd_alg = sc->sc_aalgo;
g_eli_auth_keygen(sc, dstoff, authkey);
- crda->crd_key = authkey;
- crda->crd_klen = G_ELI_AUTH_SECKEYLEN * 8;
+ crp->crp_auth_key = authkey;
- crp->crp_etype = 0;
error = crypto_dispatch(crp);
KASSERT(error == 0, ("crypto_dispatch() failed (error=%d)",
error));
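The g_eli_integrity.c changes replace the copied-out HMAC area with a per-sector int error array stored after the encrypted sectors in bio_driver2, and each completion records its slot by computing a relative sector index. That arithmetic, extracted into a runnable sketch using the "eg." numbers from the patch's comments (4096-byte decrypted sectors backed by 512-byte encrypted sectors):

```c
#include <assert.h>
#include <stddef.h>

/* Mock of the geometry g_eli_auth_read_done() works with. */
struct geom_mock {
	size_t decr_secsize;		/* decrypted provider, eg. 4096 */
	size_t encr_secsize;		/* encrypted provider, eg. 512 */
	size_t bytes_per_sector;	/* encrypted bytes per decrypted
					   sector, eg. 4608 */
};

/* Number of encrypted sectors covered by a request of bio_length bytes. */
static size_t
mock_nsec(const struct geom_mock *g, size_t bio_length)
{
	size_t nsec = bio_length / g->decr_secsize;	/* eg. 2 */

	return (nsec * g->bytes_per_sector / g->encr_secsize);	/* eg. 18 */
}

/* Offset of one request's error slot inside the read buffer: the int
 * array starts after all encrypted sectors, indexed by relative sector. */
static size_t
mock_errorp_off(const struct geom_mock *g, size_t nsec, size_t payload_off)
{
	size_t rel_sec = payload_off / g->encr_secsize;

	return (g->encr_secsize * nsec + sizeof(int) * rel_sec);
}
```

With these numbers the last-sector index lsec is 4608 / 512 = 9, matching the "eg. 9" comment, and the completion path can record EBADMSG per sector instead of comparing HMACs after the fact.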
diff --git a/sys/geom/eli/g_eli_privacy.c b/sys/geom/eli/g_eli_privacy.c
index 0a9e809e8b35..bfa1b800266b 100644
--- a/sys/geom/eli/g_eli_privacy.c
+++ b/sys/geom/eli/g_eli_privacy.c
@@ -82,7 +82,7 @@ g_eli_crypto_read_done(struct cryptop *crp)
if (crp->crp_etype == 0) {
G_ELI_DEBUG(3, "Crypto READ request done (%d/%d).",
bp->bio_inbed, bp->bio_children);
- bp->bio_completed += crp->crp_olen;
+ bp->bio_completed += crp->crp_ilen;
} else {
G_ELI_DEBUG(1, "Crypto READ request failed (%d/%d) error=%d.",
bp->bio_inbed, bp->bio_children, crp->crp_etype);
@@ -90,8 +90,9 @@ g_eli_crypto_read_done(struct cryptop *crp)
bp->bio_error = crp->crp_etype;
}
sc = bp->bio_to->geom->softc;
- if (sc != NULL)
- g_eli_key_drop(sc, crp->crp_desc->crd_key);
+ if (sc != NULL && crp->crp_cipher_key != NULL)
+ g_eli_key_drop(sc, __DECONST(void *, crp->crp_cipher_key));
+ crypto_freereq(crp);
/*
* Do we have all sectors already?
*/
@@ -143,7 +144,9 @@ g_eli_crypto_write_done(struct cryptop *crp)
}
gp = bp->bio_to->geom;
sc = gp->softc;
- g_eli_key_drop(sc, crp->crp_desc->crd_key);
+ if (crp->crp_cipher_key != NULL)
+ g_eli_key_drop(sc, __DECONST(void *, crp->crp_cipher_key));
+ crypto_freereq(crp);
/*
* All sectors are already encrypted?
*/
@@ -233,11 +236,9 @@ g_eli_crypto_run(struct g_eli_worker *wr, struct bio *bp)
{
struct g_eli_softc *sc;
struct cryptop *crp;
- struct cryptodesc *crd;
u_int i, nsec, secsize;
off_t dstoff;
- size_t size;
- u_char *p, *data;
+ u_char *data;
int error;
G_ELI_LOGREQ(3, bp, "%s", __func__);
@@ -247,71 +248,49 @@ g_eli_crypto_run(struct g_eli_worker *wr, struct bio *bp)
secsize = LIST_FIRST(&sc->sc_geom->provider)->sectorsize;
nsec = bp->bio_length / secsize;
- /*
- * Calculate how much memory do we need.
- * We need separate crypto operation for every single sector.
- * It is much faster to calculate total amount of needed memory here and
- * do the allocation once instead of allocating memory in pieces (many,
- * many pieces).
- */
- size = sizeof(*crp) * nsec;
- size += sizeof(*crd) * nsec;
+ bp->bio_inbed = 0;
+ bp->bio_children = nsec;
+
/*
* If we write the data we cannot destroy current bio_data content,
* so we need to allocate more memory for encrypted data.
*/
- if (bp->bio_cmd == BIO_WRITE)
- size += bp->bio_length;
- p = malloc(size, M_ELI, M_WAITOK);
-
- bp->bio_inbed = 0;
- bp->bio_children = nsec;
- bp->bio_driver2 = p;
-
- if (bp->bio_cmd == BIO_READ)
- data = bp->bio_data;
- else {
- data = p;
- p += bp->bio_length;
+ if (bp->bio_cmd == BIO_WRITE) {
+ data = malloc(bp->bio_length, M_ELI, M_WAITOK);
+ bp->bio_driver2 = data;
bcopy(bp->bio_data, data, bp->bio_length);
- }
+ } else
+ data = bp->bio_data;
for (i = 0, dstoff = bp->bio_offset; i < nsec; i++, dstoff += secsize) {
- crp = (struct cryptop *)p; p += sizeof(*crp);
- crd = (struct cryptodesc *)p; p += sizeof(*crd);
+ crp = crypto_getreq(wr->w_sid, M_WAITOK);
- crp->crp_session = wr->w_sid;
crp->crp_ilen = secsize;
- crp->crp_olen = secsize;
crp->crp_opaque = (void *)bp;
+ crp->crp_buf_type = CRYPTO_BUF_CONTIG;
crp->crp_buf = (void *)data;
data += secsize;
- if (bp->bio_cmd == BIO_WRITE)
+ if (bp->bio_cmd == BIO_WRITE) {
+ crp->crp_op = CRYPTO_OP_ENCRYPT;
crp->crp_callback = g_eli_crypto_write_done;
- else /* if (bp->bio_cmd == BIO_READ) */
+ } else /* if (bp->bio_cmd == BIO_READ) */ {
+ crp->crp_op = CRYPTO_OP_DECRYPT;
crp->crp_callback = g_eli_crypto_read_done;
+ }
crp->crp_flags = CRYPTO_F_CBIFSYNC;
if (g_eli_batch)
crp->crp_flags |= CRYPTO_F_BATCH;
- crp->crp_desc = crd;
- crd->crd_skip = 0;
- crd->crd_len = secsize;
- crd->crd_flags = CRD_F_IV_EXPLICIT | CRD_F_IV_PRESENT;
- if ((sc->sc_flags & G_ELI_FLAG_SINGLE_KEY) == 0)
- crd->crd_flags |= CRD_F_KEY_EXPLICIT;
- if (bp->bio_cmd == BIO_WRITE)
- crd->crd_flags |= CRD_F_ENCRYPT;
- crd->crd_alg = sc->sc_ealgo;
- crd->crd_key = g_eli_key_hold(sc, dstoff, secsize);
- crd->crd_klen = sc->sc_ekeylen;
- if (sc->sc_ealgo == CRYPTO_AES_XTS)
- crd->crd_klen <<= 1;
- g_eli_crypto_ivgen(sc, dstoff, crd->crd_iv,
- sizeof(crd->crd_iv));
- crd->crd_next = NULL;
+ crp->crp_payload_start = 0;
+ crp->crp_payload_length = secsize;
+ crp->crp_flags |= CRYPTO_F_IV_SEPARATE;
+ if ((sc->sc_flags & G_ELI_FLAG_SINGLE_KEY) == 0) {
+ crp->crp_cipher_key = g_eli_key_hold(sc, dstoff,
+ secsize);
+ }
+ g_eli_crypto_ivgen(sc, dstoff, crp->crp_iv,
+ sizeof(crp->crp_iv));
- crp->crp_etype = 0;
error = crypto_dispatch(crp);
KASSERT(error == 0, ("crypto_dispatch() failed (error=%d)",
error));
diff --git a/sys/kern/subr_bus_dma.c b/sys/kern/subr_bus_dma.c
index b050c00dfde2..92e178db6289 100644
--- a/sys/kern/subr_bus_dma.c
+++ b/sys/kern/subr_bus_dma.c
@@ -54,6 +54,8 @@ __FBSDID("$FreeBSD$");
#include <cam/cam.h>
#include <cam/cam_ccb.h>
+#include <opencrypto/cryptodev.h>
+
#include <machine/bus.h>
/*
@@ -635,3 +637,52 @@ bus_dmamap_load_mem(bus_dma_tag_t dmat, bus_dmamap_t map,
return (0);
}
+
+int
+bus_dmamap_load_crp(bus_dma_tag_t dmat, bus_dmamap_t map, struct cryptop *crp,
+ bus_dmamap_callback_t *callback, void *callback_arg, int flags)
+{
+ bus_dma_segment_t *segs;
+ int error;
+ int nsegs;
+
+ flags |= BUS_DMA_NOWAIT;
+ nsegs = -1;
+ error = 0;
+ switch (crp->crp_buf_type) {
+ case CRYPTO_BUF_CONTIG:
+ error = _bus_dmamap_load_buffer(dmat, map, crp->crp_buf,
+ crp->crp_ilen, kernel_pmap, flags, NULL, &nsegs);
+ break;
+ case CRYPTO_BUF_MBUF:
+ error = _bus_dmamap_load_mbuf_sg(dmat, map, crp->crp_mbuf,
+ NULL, &nsegs, flags);
+ break;
+ case CRYPTO_BUF_UIO:
+ error = _bus_dmamap_load_uio(dmat, map, crp->crp_uio, &nsegs,
+ flags);
+ break;
+ }
+ nsegs++;
+
+ CTR5(KTR_BUSDMA, "%s: tag %p tag flags 0x%x error %d nsegs %d",
+ __func__, dmat, flags, error, nsegs);
+
+ if (error == EINPROGRESS)
+ return (error);
+
+ segs = _bus_dmamap_complete(dmat, map, NULL, nsegs, error);
+ if (error)
+ (*callback)(callback_arg, segs, 0, error);
+ else
+ (*callback)(callback_arg, segs, nsegs, 0);
+
+ /*
+ * Return ENOMEM to the caller so that it can pass it up the stack.
+ * This error only happens when NOWAIT is set, so deferral is disabled.
+ */
+ if (error == ENOMEM)
+ return (error);
+
+ return (0);
+}
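The new bus_dmamap_load_crp() selects a loader from the request's buffer-type field rather than testing CRYPTO_F_IMBUF-style flags. The selection logic alone, as a sketch (enum values are stand-ins for the kernel's CRYPTO_BUF_* codes):

```c
#include <assert.h>

/* Stand-ins for the crp_buf_type codes and the loader each one picks. */
enum mock_buf_type { BUF_CONTIG, BUF_MBUF, BUF_UIO };
enum mock_loader { LOAD_BUFFER, LOAD_MBUF_SG, LOAD_UIO, LOAD_NONE };

static enum mock_loader
mock_pick_loader(enum mock_buf_type t)
{

	switch (t) {
	case BUF_CONTIG:
		return (LOAD_BUFFER);	/* _bus_dmamap_load_buffer() path */
	case BUF_MBUF:
		return (LOAD_MBUF_SG);	/* _bus_dmamap_load_mbuf_sg() path */
	case BUF_UIO:
		return (LOAD_UIO);	/* _bus_dmamap_load_uio() path */
	}
	return (LOAD_NONE);
}
```

Centralizing this in subr_bus_dma.c means each crypto driver maps a request with one call instead of open-coding the three buffer cases, as the ubsec hunks earlier in the diff still do.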
diff --git a/sys/kern/uipc_ktls.c b/sys/kern/uipc_ktls.c
index 98e4dfb4f47a..5275ffc2107e 100644
--- a/sys/kern/uipc_ktls.c
+++ b/sys/kern/uipc_ktls.c
@@ -437,9 +437,12 @@ ktls_create_session(struct socket *so, struct tls_enable *en,
*/
switch (en->auth_algorithm) {
case 0:
+#ifdef COMPAT_FREEBSD12
+ /* XXX: Really 13.0-current COMPAT. */
case CRYPTO_AES_128_NIST_GMAC:
case CRYPTO_AES_192_NIST_GMAC:
case CRYPTO_AES_256_NIST_GMAC:
+#endif
break;
default:
return (EINVAL);
diff --git a/sys/kgssapi/krb5/kcrypto_aes.c b/sys/kgssapi/krb5/kcrypto_aes.c
index 9d0f98c06ea1..54c5b06d6919 100644
--- a/sys/kgssapi/krb5/kcrypto_aes.c
+++ b/sys/kgssapi/krb5/kcrypto_aes.c
@@ -77,7 +77,7 @@ aes_set_key(struct krb5_key_state *ks, const void *in)
{
void *kp = ks->ks_key;
struct aes_state *as = ks->ks_priv;
- struct cryptoini cri;
+ struct crypto_session_params csp;
if (kp != in)
bcopy(in, kp, ks->ks_class->ec_keylen);
@@ -90,22 +90,22 @@ aes_set_key(struct krb5_key_state *ks, const void *in)
/*
* We only want the first 96 bits of the HMAC.
*/
- bzero(&cri, sizeof(cri));
- cri.cri_alg = CRYPTO_SHA1_HMAC;
- cri.cri_klen = ks->ks_class->ec_keybits;
- cri.cri_mlen = 12;
- cri.cri_key = ks->ks_key;
- cri.cri_next = NULL;
- crypto_newsession(&as->as_session_sha1, &cri,
+ memset(&csp, 0, sizeof(csp));
+ csp.csp_mode = CSP_MODE_DIGEST;
+ csp.csp_auth_alg = CRYPTO_SHA1_HMAC;
+ csp.csp_auth_klen = ks->ks_class->ec_keybits / 8;
+ csp.csp_auth_mlen = 12;
+ csp.csp_auth_key = ks->ks_key;
+ crypto_newsession(&as->as_session_sha1, &csp,
CRYPTOCAP_F_HARDWARE | CRYPTOCAP_F_SOFTWARE);
- bzero(&cri, sizeof(cri));
- cri.cri_alg = CRYPTO_AES_CBC;
- cri.cri_klen = ks->ks_class->ec_keybits;
- cri.cri_mlen = 0;
- cri.cri_key = ks->ks_key;
- cri.cri_next = NULL;
- crypto_newsession(&as->as_session_aes, &cri,
+ memset(&csp, 0, sizeof(csp));
+ csp.csp_mode = CSP_MODE_CIPHER;
+ csp.csp_cipher_alg = CRYPTO_AES_CBC;
+ csp.csp_cipher_klen = ks->ks_class->ec_keybits / 8;
+ csp.csp_cipher_key = ks->ks_key;
+ csp.csp_ivlen = 16;
+ crypto_newsession(&as->as_session_aes, &csp,
CRYPTOCAP_F_HARDWARE | CRYPTOCAP_F_SOFTWARE);
}
@@ -138,31 +138,27 @@ aes_crypto_cb(struct cryptop *crp)
static void
aes_encrypt_1(const struct krb5_key_state *ks, int buftype, void *buf,
- size_t skip, size_t len, void *ivec, int encdec)
+ size_t skip, size_t len, void *ivec, bool encrypt)
{
struct aes_state *as = ks->ks_priv;
struct cryptop *crp;
- struct cryptodesc *crd;
int error;
- crp = crypto_getreq(1);
- crd = crp->crp_desc;
+ crp = crypto_getreq(as->as_session_aes, M_WAITOK);
- crd->crd_skip = skip;
- crd->crd_len = len;
- crd->crd_flags = CRD_F_IV_EXPLICIT | CRD_F_IV_PRESENT | encdec;
+ crp->crp_payload_start = skip;
+ crp->crp_payload_length = len;
+ crp->crp_op = encrypt ? CRYPTO_OP_ENCRYPT : CRYPTO_OP_DECRYPT;
+ crp->crp_flags = CRYPTO_F_CBIFSYNC | CRYPTO_F_IV_SEPARATE;
if (ivec) {
- bcopy(ivec, crd->crd_iv, 16);
+ memcpy(crp->crp_iv, ivec, 16);
} else {
- bzero(crd->crd_iv, 16);
+ memset(crp->crp_iv, 0, 16);
}
- crd->crd_next = NULL;
- crd->crd_alg = CRYPTO_AES_CBC;
- crp->crp_session = as->as_session_aes;
- crp->crp_flags = buftype | CRYPTO_F_CBIFSYNC;
+ crp->crp_buf_type = buftype;
crp->crp_buf = buf;
- crp->crp_opaque = (void *) as;
+ crp->crp_opaque = as;
crp->crp_callback = aes_crypto_cb;
error = crypto_dispatch(crp);
@@ -204,16 +200,16 @@ aes_encrypt(const struct krb5_key_state *ks, struct mbuf *inout,
/*
* Note: caller will ensure len >= blocklen.
*/
- aes_encrypt_1(ks, CRYPTO_F_IMBUF, inout, skip, len, ivec,
- CRD_F_ENCRYPT);
+ aes_encrypt_1(ks, CRYPTO_BUF_MBUF, inout, skip, len, ivec,
+ true);
} else if (plen == 0) {
/*
* This is equivalent to CBC mode followed by swapping
* the last two blocks. We assume that neither of the
* last two blocks cross iov boundaries.
*/
- aes_encrypt_1(ks, CRYPTO_F_IMBUF, inout, skip, len, ivec,
- CRD_F_ENCRYPT);
+ aes_encrypt_1(ks, CRYPTO_BUF_MBUF, inout, skip, len, ivec,
+ true);
off = skip + len - 2 * blocklen;
m_copydata(inout, off, 2 * blocklen, (void*) &last2);
m_copyback(inout, off, blocklen, last2.cn);
@@ -227,8 +223,8 @@ aes_encrypt(const struct krb5_key_state *ks, struct mbuf *inout,
* the encrypted versions of the last two blocks, we
* reshuffle to create the final result.
*/
- aes_encrypt_1(ks, CRYPTO_F_IMBUF, inout, skip, len - plen,
- ivec, CRD_F_ENCRYPT);
+ aes_encrypt_1(ks, CRYPTO_BUF_MBUF, inout, skip, len - plen,
+ ivec, true);
/*
* Copy out the last two blocks, pad the last block
@@ -241,8 +237,8 @@ aes_encrypt(const struct krb5_key_state *ks, struct mbuf *inout,
m_copydata(inout, off, blocklen + plen, (void*) &last2);
for (i = plen; i < blocklen; i++)
last2.cn[i] = 0;
- aes_encrypt_1(ks, 0, last2.cn, 0, blocklen, last2.cn_1,
- CRD_F_ENCRYPT);
+ aes_encrypt_1(ks, CRYPTO_BUF_CONTIG, last2.cn, 0, blocklen,
+ last2.cn_1, true);
m_copyback(inout, off, blocklen, last2.cn);
m_copyback(inout, off + blocklen, plen, last2.cn_1);
}
@@ -274,7 +270,8 @@ aes_decrypt(const struct krb5_key_state *ks, struct mbuf *inout,
/*
* Note: caller will ensure len >= blocklen.
*/
- aes_encrypt_1(ks, CRYPTO_F_IMBUF, inout, skip, len, ivec, 0);
+ aes_encrypt_1(ks, CRYPTO_BUF_MBUF, inout, skip, len, ivec,
+ false);
} else if (plen == 0) {
/*
* This is equivalent to CBC mode followed by swapping
@@ -284,7 +281,8 @@ aes_decrypt(const struct krb5_key_state *ks, struct mbuf *inout,
m_copydata(inout, off, 2 * blocklen, (void*) &last2);
m_copyback(inout, off, blocklen, last2.cn);
m_copyback(inout, off + blocklen, blocklen, last2.cn_1);
- aes_encrypt_1(ks, CRYPTO_F_IMBUF, inout, skip, len, ivec, 0);
+ aes_encrypt_1(ks, CRYPTO_BUF_MBUF, inout, skip, len, ivec,
+ false);
} else {
/*
* This is the difficult case. We first decrypt the
@@ -298,8 +296,8 @@ aes_decrypt(const struct krb5_key_state *ks, struct mbuf *inout,
* decrypted with the rest in CBC mode.
*/
off = skip + len - plen - blocklen;
- aes_encrypt_1(ks, CRYPTO_F_IMBUF, inout, off, blocklen,
- NULL, 0);
+ aes_encrypt_1(ks, CRYPTO_BUF_MBUF, inout, off, blocklen,
+ NULL, false);
m_copydata(inout, off, blocklen + plen, (void*) &last2);
for (i = 0; i < plen; i++) {
@@ -309,8 +307,8 @@ aes_decrypt(const struct krb5_key_state *ks, struct mbuf *inout,
}
m_copyback(inout, off, blocklen + plen, (void*) &last2);
- aes_encrypt_1(ks, CRYPTO_F_IMBUF, inout, skip, len - plen,
- ivec, 0);
+ aes_encrypt_1(ks, CRYPTO_BUF_MBUF, inout, skip, len - plen,
+ ivec, false);
}
}
@@ -321,26 +319,17 @@ aes_checksum(const struct krb5_key_state *ks, int usage,
{
struct aes_state *as = ks->ks_priv;
struct cryptop *crp;
- struct cryptodesc *crd;
int error;
- crp = crypto_getreq(1);
- crd = crp->crp_desc;
-
- crd->crd_skip = skip;
- crd->crd_len = inlen;
- crd->crd_inject = skip + inlen;
- crd->crd_flags = 0;
- crd->crd_next = NULL;
- crd->crd_alg = CRYPTO_SHA1_HMAC;
-
- crp->crp_session = as->as_session_sha1;
- crp->crp_ilen = inlen;
- crp->crp_olen = 12;
- crp->crp_etype = 0;
- crp->crp_flags = CRYPTO_F_IMBUF | CRYPTO_F_CBIFSYNC;
- crp->crp_buf = (void *) inout;
- crp->crp_opaque = (void *) as;
+ crp = crypto_getreq(as->as_session_sha1, M_WAITOK);
+
+ crp->crp_payload_start = skip;
+ crp->crp_payload_length = inlen;
+ crp->crp_digest_start = skip + inlen;
+ crp->crp_flags = CRYPTO_F_CBIFSYNC;
+ crp->crp_buf_type = CRYPTO_BUF_MBUF;
+ crp->crp_mbuf = inout;
+ crp->crp_opaque = as;
crp->crp_callback = aes_crypto_cb;
error = crypto_dispatch(crp);
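The `plen == 0` branch above relies on the standard Kerberos CBC-CTS identity: ciphertext stealing with no partial block is plain CBC followed by swapping the last two ciphertext blocks. A minimal userland sketch of that swap (a flat buffer and `memcpy` stand in for the mbuf chain and `m_copydata()`/`m_copyback()`; `cts_swap_last_two` is a hypothetical name, not a kernel function):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BLOCKLEN 16			/* AES block size */

/*
 * Swap the last two ciphertext blocks of a CBC-encrypted buffer,
 * mirroring the plen == 0 case of aes_encrypt() above.  The kernel
 * code assumes neither block crosses an iov/mbuf boundary; here the
 * buffer is contiguous so no such care is needed.
 */
static void
cts_swap_last_two(uint8_t *buf, size_t len)
{
	uint8_t tmp[BLOCKLEN];
	size_t off;

	assert(len >= 2 * BLOCKLEN);
	off = len - 2 * BLOCKLEN;
	memcpy(tmp, buf + off, BLOCKLEN);		 /* save C[n-1] */
	memmove(buf + off, buf + off + BLOCKLEN, BLOCKLEN); /* C[n] first */
	memcpy(buf + off + BLOCKLEN, tmp, BLOCKLEN);	 /* C[n-1] last */
}
```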
diff --git a/sys/kgssapi/krb5/kcrypto_des.c b/sys/kgssapi/krb5/kcrypto_des.c
index 65dbed5b66b3..391905dad50f 100644
--- a/sys/kgssapi/krb5/kcrypto_des.c
+++ b/sys/kgssapi/krb5/kcrypto_des.c
@@ -78,25 +78,24 @@ des1_destroy(struct krb5_key_state *ks)
static void
des1_set_key(struct krb5_key_state *ks, const void *in)
{
+ struct crypto_session_params csp;
void *kp = ks->ks_key;
struct des1_state *ds = ks->ks_priv;
- struct cryptoini cri[1];
-
- if (kp != in)
- bcopy(in, kp, ks->ks_class->ec_keylen);
if (ds->ds_session)
crypto_freesession(ds->ds_session);
- bzero(cri, sizeof(cri));
+ if (kp != in)
+ bcopy(in, kp, ks->ks_class->ec_keylen);
- cri[0].cri_alg = CRYPTO_DES_CBC;
- cri[0].cri_klen = 64;
- cri[0].cri_mlen = 0;
- cri[0].cri_key = ks->ks_key;
- cri[0].cri_next = NULL;
+ memset(&csp, 0, sizeof(csp));
+ csp.csp_mode = CSP_MODE_CIPHER;
+ csp.csp_ivlen = 8;
+ csp.csp_cipher_alg = CRYPTO_DES_CBC;
+ csp.csp_cipher_klen = 8;
+ csp.csp_cipher_key = ks->ks_key;
- crypto_newsession(&ds->ds_session, cri,
+ crypto_newsession(&ds->ds_session, &csp,
CRYPTOCAP_F_HARDWARE | CRYPTOCAP_F_SOFTWARE);
}
@@ -163,32 +162,27 @@ des1_crypto_cb(struct cryptop *crp)
}
static void
-des1_encrypt_1(const struct krb5_key_state *ks, int buftype, void *buf,
- size_t skip, size_t len, void *ivec, int encdec)
+des1_encrypt_1(const struct krb5_key_state *ks, int buf_type, void *buf,
+ size_t skip, size_t len, void *ivec, bool encrypt)
{
struct des1_state *ds = ks->ks_priv;
struct cryptop *crp;
- struct cryptodesc *crd;
int error;
- crp = crypto_getreq(1);
- crd = crp->crp_desc;
+ crp = crypto_getreq(ds->ds_session, M_WAITOK);
- crd->crd_skip = skip;
- crd->crd_len = len;
- crd->crd_flags = CRD_F_IV_EXPLICIT | CRD_F_IV_PRESENT | encdec;
+ crp->crp_payload_start = skip;
+ crp->crp_payload_length = len;
+ crp->crp_op = encrypt ? CRYPTO_OP_ENCRYPT : CRYPTO_OP_DECRYPT;
+ crp->crp_flags = CRYPTO_F_CBIFSYNC | CRYPTO_F_IV_SEPARATE;
if (ivec) {
- bcopy(ivec, crd->crd_iv, 8);
+ memcpy(crp->crp_iv, ivec, 8);
} else {
- bzero(crd->crd_iv, 8);
+ memset(crp->crp_iv, 0, 8);
}
- crd->crd_next = NULL;
- crd->crd_alg = CRYPTO_DES_CBC;
-
- crp->crp_session = ds->ds_session;
- crp->crp_flags = buftype | CRYPTO_F_CBIFSYNC;
+ crp->crp_buf_type = buf_type;
crp->crp_buf = buf;
- crp->crp_opaque = (void *) ds;
+ crp->crp_opaque = ds;
crp->crp_callback = des1_crypto_cb;
error = crypto_dispatch(crp);
@@ -208,8 +202,7 @@ des1_encrypt(const struct krb5_key_state *ks, struct mbuf *inout,
size_t skip, size_t len, void *ivec, size_t ivlen)
{
- des1_encrypt_1(ks, CRYPTO_F_IMBUF, inout, skip, len, ivec,
- CRD_F_ENCRYPT);
+ des1_encrypt_1(ks, CRYPTO_BUF_MBUF, inout, skip, len, ivec, true);
}
static void
@@ -217,7 +210,7 @@ des1_decrypt(const struct krb5_key_state *ks, struct mbuf *inout,
size_t skip, size_t len, void *ivec, size_t ivlen)
{
- des1_encrypt_1(ks, CRYPTO_F_IMBUF, inout, skip, len, ivec, 0);
+ des1_encrypt_1(ks, CRYPTO_BUF_MBUF, inout, skip, len, ivec, false);
}
static int
@@ -244,7 +237,7 @@ des1_checksum(const struct krb5_key_state *ks, int usage,
m_apply(inout, skip, inlen, MD5Update_int, &md5);
MD5Final(hash, &md5);
- des1_encrypt_1(ks, 0, hash, 0, 16, NULL, CRD_F_ENCRYPT);
+ des1_encrypt_1(ks, CRYPTO_BUF_CONTIG, hash, 0, 16, NULL, true);
m_copyback(inout, skip + inlen, outlen, hash + 8);
}
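One easy-to-miss detail in the kcrypto_des.c hunks above: the legacy `cryptoini` interface carried key lengths in bits (`cri_klen = 64` for DES), while `crypto_session_params` carries them in bytes (`csp_cipher_klen = 8`). Old drivers converted with the rounding shown below; `klen_bits_to_bytes` is an illustrative helper name, not part of either API:

```c
#include <assert.h>

/*
 * Convert a legacy bit-count key length (cri_klen) to the byte count
 * used by csp_cipher_klen / csp_auth_klen, rounding up as drivers
 * such as cryptocteon formerly did with (cri_klen + 7) / 8.
 */
static int
klen_bits_to_bytes(int klen_bits)
{
	return ((klen_bits + 7) / 8);
}
```

This is why the diff replaces `cri_klen = 64` with `csp_cipher_klen = 8` for DES, and `cri_klen = 192` with klen 24 for 3DES and the SHA1 HMAC key.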
diff --git a/sys/kgssapi/krb5/kcrypto_des3.c b/sys/kgssapi/krb5/kcrypto_des3.c
index 1038908d6650..0055b24cdbdf 100644
--- a/sys/kgssapi/krb5/kcrypto_des3.c
+++ b/sys/kgssapi/krb5/kcrypto_des3.c
@@ -48,7 +48,8 @@ __FBSDID("$FreeBSD$");
struct des3_state {
struct mtx ds_lock;
- crypto_session_t ds_session;
+ crypto_session_t ds_cipher_session;
+ crypto_session_t ds_hmac_session;
};
static void
@@ -69,8 +70,10 @@ des3_destroy(struct krb5_key_state *ks)
{
struct des3_state *ds = ks->ks_priv;
- if (ds->ds_session)
- crypto_freesession(ds->ds_session);
+ if (ds->ds_cipher_session) {
+ crypto_freesession(ds->ds_cipher_session);
+ crypto_freesession(ds->ds_hmac_session);
+ }
mtx_destroy(&ds->ds_lock);
free(ks->ks_priv, M_GSSAPI);
}
@@ -78,31 +81,35 @@ des3_destroy(struct krb5_key_state *ks)
static void
des3_set_key(struct krb5_key_state *ks, const void *in)
{
+ struct crypto_session_params csp;
void *kp = ks->ks_key;
struct des3_state *ds = ks->ks_priv;
- struct cryptoini cri[2];
+
+ if (ds->ds_cipher_session) {
+ crypto_freesession(ds->ds_cipher_session);
+ crypto_freesession(ds->ds_hmac_session);
+ }
if (kp != in)
bcopy(in, kp, ks->ks_class->ec_keylen);
- if (ds->ds_session)
- crypto_freesession(ds->ds_session);
-
- bzero(cri, sizeof(cri));
+ memset(&csp, 0, sizeof(csp));
+ csp.csp_mode = CSP_MODE_DIGEST;
+ csp.csp_auth_alg = CRYPTO_SHA1_HMAC;
+ csp.csp_auth_klen = 24;
+ csp.csp_auth_key = ks->ks_key;
- cri[0].cri_alg = CRYPTO_SHA1_HMAC;
- cri[0].cri_klen = 192;
- cri[0].cri_mlen = 0;
- cri[0].cri_key = ks->ks_key;
- cri[0].cri_next = &cri[1];
+ crypto_newsession(&ds->ds_hmac_session, &csp,
+ CRYPTOCAP_F_HARDWARE | CRYPTOCAP_F_SOFTWARE);
- cri[1].cri_alg = CRYPTO_3DES_CBC;
- cri[1].cri_klen = 192;
- cri[1].cri_mlen = 0;
- cri[1].cri_key = ks->ks_key;
- cri[1].cri_next = NULL;
+ memset(&csp, 0, sizeof(csp));
+ csp.csp_mode = CSP_MODE_CIPHER;
+ csp.csp_cipher_alg = CRYPTO_3DES_CBC;
+ csp.csp_cipher_klen = 24;
+ csp.csp_cipher_key = ks->ks_key;
+ csp.csp_ivlen = 8;
- crypto_newsession(&ds->ds_session, cri,
+ crypto_newsession(&ds->ds_cipher_session, &csp,
CRYPTOCAP_F_HARDWARE | CRYPTOCAP_F_SOFTWARE);
}
@@ -158,7 +165,7 @@ des3_crypto_cb(struct cryptop *crp)
int error;
struct des3_state *ds = (struct des3_state *) crp->crp_opaque;
- if (crypto_ses2caps(ds->ds_session) & CRYPTOCAP_F_SYNC)
+ if (crypto_ses2caps(crp->crp_session) & CRYPTOCAP_F_SYNC)
return (0);
error = crp->crp_etype;
@@ -174,36 +181,31 @@ des3_crypto_cb(struct cryptop *crp)
static void
des3_encrypt_1(const struct krb5_key_state *ks, struct mbuf *inout,
- size_t skip, size_t len, void *ivec, int encdec)
+ size_t skip, size_t len, void *ivec, bool encrypt)
{
struct des3_state *ds = ks->ks_priv;
struct cryptop *crp;
- struct cryptodesc *crd;
int error;
- crp = crypto_getreq(1);
- crd = crp->crp_desc;
+ crp = crypto_getreq(ds->ds_cipher_session, M_WAITOK);
- crd->crd_skip = skip;
- crd->crd_len = len;
- crd->crd_flags = CRD_F_IV_EXPLICIT | CRD_F_IV_PRESENT | encdec;
+ crp->crp_payload_start = skip;
+ crp->crp_payload_length = len;
+ crp->crp_op = encrypt ? CRYPTO_OP_ENCRYPT : CRYPTO_OP_DECRYPT;
+ crp->crp_flags = CRYPTO_F_CBIFSYNC | CRYPTO_F_IV_SEPARATE;
if (ivec) {
- bcopy(ivec, crd->crd_iv, 8);
+ memcpy(crp->crp_iv, ivec, 8);
} else {
- bzero(crd->crd_iv, 8);
+ memset(crp->crp_iv, 0, 8);
}
- crd->crd_next = NULL;
- crd->crd_alg = CRYPTO_3DES_CBC;
-
- crp->crp_session = ds->ds_session;
- crp->crp_flags = CRYPTO_F_IMBUF | CRYPTO_F_CBIFSYNC;
- crp->crp_buf = (void *) inout;
- crp->crp_opaque = (void *) ds;
+ crp->crp_buf_type = CRYPTO_BUF_MBUF;
+ crp->crp_mbuf = inout;
+ crp->crp_opaque = ds;
crp->crp_callback = des3_crypto_cb;
error = crypto_dispatch(crp);
- if ((crypto_ses2caps(ds->ds_session) & CRYPTOCAP_F_SYNC) == 0) {
+ if ((crypto_ses2caps(ds->ds_cipher_session) & CRYPTOCAP_F_SYNC) == 0) {
mtx_lock(&ds->ds_lock);
if (!error && !(crp->crp_flags & CRYPTO_F_DONE))
error = msleep(crp, &ds->ds_lock, 0, "gssdes3", 0);
@@ -218,7 +220,7 @@ des3_encrypt(const struct krb5_key_state *ks, struct mbuf *inout,
size_t skip, size_t len, void *ivec, size_t ivlen)
{
- des3_encrypt_1(ks, inout, skip, len, ivec, CRD_F_ENCRYPT);
+ des3_encrypt_1(ks, inout, skip, len, ivec, true);
}
static void
@@ -226,7 +228,7 @@ des3_decrypt(const struct krb5_key_state *ks, struct mbuf *inout,
size_t skip, size_t len, void *ivec, size_t ivlen)
{
- des3_encrypt_1(ks, inout, skip, len, ivec, 0);
+ des3_encrypt_1(ks, inout, skip, len, ivec, false);
}
static void
@@ -235,31 +237,23 @@ des3_checksum(const struct krb5_key_state *ks, int usage,
{
struct des3_state *ds = ks->ks_priv;
struct cryptop *crp;
- struct cryptodesc *crd;
int error;
- crp = crypto_getreq(1);
- crd = crp->crp_desc;
-
- crd->crd_skip = skip;
- crd->crd_len = inlen;
- crd->crd_inject = skip + inlen;
- crd->crd_flags = 0;
- crd->crd_next = NULL;
- crd->crd_alg = CRYPTO_SHA1_HMAC;
-
- crp->crp_session = ds->ds_session;
- crp->crp_ilen = inlen;
- crp->crp_olen = 20;
- crp->crp_etype = 0;
- crp->crp_flags = CRYPTO_F_IMBUF | CRYPTO_F_CBIFSYNC;
- crp->crp_buf = (void *) inout;
- crp->crp_opaque = (void *) ds;
+ crp = crypto_getreq(ds->ds_hmac_session, M_WAITOK);
+
+ crp->crp_payload_start = skip;
+ crp->crp_payload_length = inlen;
+ crp->crp_digest_start = skip + inlen;
+ crp->crp_op = CRYPTO_OP_COMPUTE_DIGEST;
+ crp->crp_flags = CRYPTO_F_CBIFSYNC;
+ crp->crp_buf_type = CRYPTO_BUF_MBUF;
+ crp->crp_mbuf = inout;
+ crp->crp_opaque = ds;
crp->crp_callback = des3_crypto_cb;
error = crypto_dispatch(crp);
- if ((crypto_ses2caps(ds->ds_session) & CRYPTOCAP_F_SYNC) == 0) {
+ if ((crypto_ses2caps(ds->ds_hmac_session) & CRYPTOCAP_F_SYNC) == 0) {
mtx_lock(&ds->ds_lock);
if (!error && !(crp->crp_flags & CRYPTO_F_DONE))
error = msleep(crp, &ds->ds_lock, 0, "gssdes3", 0);
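Both before and after the refactor, `des3_encrypt_1()` and `des3_checksum()` use the same completion handshake: dispatch the request with `CRYPTO_F_CBIFSYNC`, and unless the chosen driver is synchronous (`CRYPTOCAP_F_SYNC`), sleep until the callback sets `CRYPTO_F_DONE` and wakes the waiter. A userland model of that handshake, with pthreads standing in for `mtx_lock()`/`msleep()`/`wakeup()` (all `fake_*` names are stand-ins, not kernel symbols):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Models the fields of struct cryptop that the handshake touches. */
struct fake_crp {
	pthread_mutex_t	lock;	/* models ds->ds_lock */
	pthread_cond_t	cv;
	bool		done;	/* models CRYPTO_F_DONE */
	int		etype;	/* models crp_etype */
};

/* Models des3_crypto_cb(): mark done and wake the sleeping caller. */
static void *
fake_crypto_cb(void *arg)
{
	struct fake_crp *crp = arg;

	pthread_mutex_lock(&crp->lock);
	crp->etype = 0;
	crp->done = true;
	pthread_cond_signal(&crp->cv);		/* models wakeup(crp) */
	pthread_mutex_unlock(&crp->lock);
	return (NULL);
}

/* Models crypto_dispatch() followed by the msleep() loop above. */
static int
fake_dispatch_and_wait(struct fake_crp *crp)
{
	pthread_t cb;

	pthread_create(&cb, NULL, fake_crypto_cb, crp);
	pthread_mutex_lock(&crp->lock);
	while (!crp->done)			/* models msleep(crp, ...) */
		pthread_cond_wait(&crp->cv, &crp->lock);
	pthread_mutex_unlock(&crp->lock);
	pthread_join(cb, NULL);
	return (crp->etype);
}
```

Note the refactor's one behavioral tweak here: the sync check now consults the per-purpose session (`ds_cipher_session` or `ds_hmac_session`) instead of the old single `ds_session`.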
diff --git a/sys/mips/cavium/cryptocteon/cavium_crypto.c b/sys/mips/cavium/cryptocteon/cavium_crypto.c
index 6ff0e6f58440..e68a2757b466 100644
--- a/sys/mips/cavium/cryptocteon/cavium_crypto.c
+++ b/sys/mips/cavium/cryptocteon/cavium_crypto.c
@@ -328,7 +328,7 @@ octo_des_cbc_encrypt(
struct iovec *iov, size_t iovcnt, size_t iovlen,
int auth_off, int auth_len,
int crypt_off, int crypt_len,
- int icv_off, uint8_t *ivp)
+ uint8_t *icv, uint8_t *ivp)
{
uint64_t *data;
int data_i, data_l;
@@ -339,8 +339,8 @@ octo_des_cbc_encrypt(
(crypt_off & 0x7) || (crypt_off + crypt_len > iovlen))) {
dprintf("%s: Bad parameters od=%p iov=%p iovlen=%jd "
"auth_off=%d auth_len=%d crypt_off=%d crypt_len=%d "
- "icv_off=%d ivp=%p\n", __func__, od, iov, iovlen,
- auth_off, auth_len, crypt_off, crypt_len, icv_off, ivp);
+ "icv=%p ivp=%p\n", __func__, od, iov, iovlen,
+ auth_off, auth_len, crypt_off, crypt_len, icv, ivp);
return -EINVAL;
}
@@ -387,7 +387,7 @@ octo_des_cbc_decrypt(
struct iovec *iov, size_t iovcnt, size_t iovlen,
int auth_off, int auth_len,
int crypt_off, int crypt_len,
- int icv_off, uint8_t *ivp)
+ uint8_t *icv, uint8_t *ivp)
{
uint64_t *data;
int data_i, data_l;
@@ -398,8 +398,8 @@ octo_des_cbc_decrypt(
(crypt_off & 0x7) || (crypt_off + crypt_len > iovlen))) {
dprintf("%s: Bad parameters od=%p iov=%p iovlen=%jd "
"auth_off=%d auth_len=%d crypt_off=%d crypt_len=%d "
- "icv_off=%d ivp=%p\n", __func__, od, iov, iovlen,
- auth_off, auth_len, crypt_off, crypt_len, icv_off, ivp);
+ "icv=%p ivp=%p\n", __func__, od, iov, iovlen,
+ auth_off, auth_len, crypt_off, crypt_len, icv, ivp);
return -EINVAL;
}
@@ -447,7 +447,7 @@ octo_aes_cbc_encrypt(
struct iovec *iov, size_t iovcnt, size_t iovlen,
int auth_off, int auth_len,
int crypt_off, int crypt_len,
- int icv_off, uint8_t *ivp)
+ uint8_t *icv, uint8_t *ivp)
{
uint64_t *data, *pdata;
int data_i, data_l;
@@ -458,8 +458,8 @@ octo_aes_cbc_encrypt(
(crypt_off & 0x7) || (crypt_off + crypt_len > iovlen))) {
dprintf("%s: Bad parameters od=%p iov=%p iovlen=%jd "
"auth_off=%d auth_len=%d crypt_off=%d crypt_len=%d "
- "icv_off=%d ivp=%p\n", __func__, od, iov, iovlen,
- auth_off, auth_len, crypt_off, crypt_len, icv_off, ivp);
+ "icv=%p ivp=%p\n", __func__, od, iov, iovlen,
+ auth_off, auth_len, crypt_off, crypt_len, icv, ivp);
return -EINVAL;
}
@@ -516,7 +516,7 @@ octo_aes_cbc_decrypt(
struct iovec *iov, size_t iovcnt, size_t iovlen,
int auth_off, int auth_len,
int crypt_off, int crypt_len,
- int icv_off, uint8_t *ivp)
+ uint8_t *icv, uint8_t *ivp)
{
uint64_t *data, *pdata;
int data_i, data_l;
@@ -527,8 +527,8 @@ octo_aes_cbc_decrypt(
(crypt_off & 0x7) || (crypt_off + crypt_len > iovlen))) {
dprintf("%s: Bad parameters od=%p iov=%p iovlen=%jd "
"auth_off=%d auth_len=%d crypt_off=%d crypt_len=%d "
- "icv_off=%d ivp=%p\n", __func__, od, iov, iovlen,
- auth_off, auth_len, crypt_off, crypt_len, icv_off, ivp);
+ "icv=%p ivp=%p\n", __func__, od, iov, iovlen,
+ auth_off, auth_len, crypt_off, crypt_len, icv, ivp);
return -EINVAL;
}
@@ -587,7 +587,7 @@ octo_null_md5_encrypt(
struct iovec *iov, size_t iovcnt, size_t iovlen,
int auth_off, int auth_len,
int crypt_off, int crypt_len,
- int icv_off, uint8_t *ivp)
+ uint8_t *icv, uint8_t *ivp)
{
int next = 0;
uint64_t *data;
@@ -600,8 +600,8 @@ octo_null_md5_encrypt(
(auth_off & 0x7) || (auth_off + auth_len > iovlen))) {
dprintf("%s: Bad parameters od=%p iov=%p iovlen=%jd "
"auth_off=%d auth_len=%d crypt_off=%d crypt_len=%d "
- "icv_off=%d ivp=%p\n", __func__, od, iov, iovlen,
- auth_off, auth_len, crypt_off, crypt_len, icv_off, ivp);
+ "icv=%p ivp=%p\n", __func__, od, iov, iovlen,
+ auth_off, auth_len, crypt_off, crypt_len, icv, ivp);
return -EINVAL;
}
@@ -667,13 +667,9 @@ octo_null_md5_encrypt(
CVMX_MT_HSH_STARTMD5(tmp1);
/* save the HMAC */
- IOV_INIT(iov, data, data_i, data_l);
- while (icv_off > 0) {
- IOV_CONSUME(iov, data, data_i, data_l);
- icv_off -= 8;
- }
+ data = (uint64_t *)icv;
CVMX_MF_HSH_IV(*data, 0);
- IOV_CONSUME(iov, data, data_i, data_l);
+ data++;
CVMX_MF_HSH_IV(tmp1, 1);
*(uint32_t *)data = (uint32_t) (tmp1 >> 32);
@@ -689,7 +685,7 @@ octo_null_sha1_encrypt(
struct iovec *iov, size_t iovcnt, size_t iovlen,
int auth_off, int auth_len,
int crypt_off, int crypt_len,
- int icv_off, uint8_t *ivp)
+ uint8_t *icv, uint8_t *ivp)
{
int next = 0;
uint64_t *data;
@@ -702,8 +698,8 @@ octo_null_sha1_encrypt(
(auth_off & 0x7) || (auth_off + auth_len > iovlen))) {
dprintf("%s: Bad parameters od=%p iov=%p iovlen=%jd "
"auth_off=%d auth_len=%d crypt_off=%d crypt_len=%d "
- "icv_off=%d ivp=%p\n", __func__, od, iov, iovlen,
- auth_off, auth_len, crypt_off, crypt_len, icv_off, ivp);
+ "icv=%p ivp=%p\n", __func__, od, iov, iovlen,
+ auth_off, auth_len, crypt_off, crypt_len, icv, ivp);
return -EINVAL;
}
@@ -772,13 +768,9 @@ octo_null_sha1_encrypt(
CVMX_MT_HSH_STARTSHA((uint64_t) ((64 + 20) << 3));
/* save the HMAC */
- IOV_INIT(iov, data, data_i, data_l);
- while (icv_off > 0) {
- IOV_CONSUME(iov, data, data_i, data_l);
- icv_off -= 8;
- }
+ data = (uint64_t *)icv;
CVMX_MF_HSH_IV(*data, 0);
- IOV_CONSUME(iov, data, data_i, data_l);
+ data++;
CVMX_MF_HSH_IV(tmp1, 1);
*(uint32_t *)data = (uint32_t) (tmp1 >> 32);
@@ -794,7 +786,7 @@ octo_des_cbc_md5_encrypt(
struct iovec *iov, size_t iovcnt, size_t iovlen,
int auth_off, int auth_len,
int crypt_off, int crypt_len,
- int icv_off, uint8_t *ivp)
+ uint8_t *icv, uint8_t *ivp)
{
int next = 0;
union {
@@ -815,8 +807,8 @@ octo_des_cbc_md5_encrypt(
(auth_off & 0x3) || (auth_off + auth_len > iovlen))) {
dprintf("%s: Bad parameters od=%p iov=%p iovlen=%jd "
"auth_off=%d auth_len=%d crypt_off=%d crypt_len=%d "
- "icv_off=%d ivp=%p\n", __func__, od, iov, iovlen,
- auth_off, auth_len, crypt_off, crypt_len, icv_off, ivp);
+ "icv=%p ivp=%p\n", __func__, od, iov, iovlen,
+ auth_off, auth_len, crypt_off, crypt_len, icv, ivp);
return -EINVAL;
}
@@ -920,16 +912,12 @@ octo_des_cbc_md5_encrypt(
CVMX_MT_HSH_STARTMD5(tmp1);
/* save the HMAC */
- IOV_INIT(iov, data32, data_i, data_l);
- while (icv_off > 0) {
- IOV_CONSUME(iov, data32, data_i, data_l);
- icv_off -= 4;
- }
+ data32 = (uint32_t *)icv;
CVMX_MF_HSH_IV(tmp1, 0);
*data32 = (uint32_t) (tmp1 >> 32);
- IOV_CONSUME(iov, data32, data_i, data_l);
+ data32++;
*data32 = (uint32_t) tmp1;
- IOV_CONSUME(iov, data32, data_i, data_l);
+ data32++;
CVMX_MF_HSH_IV(tmp1, 1);
*data32 = (uint32_t) (tmp1 >> 32);
@@ -942,7 +930,7 @@ octo_des_cbc_md5_decrypt(
struct iovec *iov, size_t iovcnt, size_t iovlen,
int auth_off, int auth_len,
int crypt_off, int crypt_len,
- int icv_off, uint8_t *ivp)
+ uint8_t *icv, uint8_t *ivp)
{
int next = 0;
union {
@@ -963,8 +951,8 @@ octo_des_cbc_md5_decrypt(
(auth_off & 0x3) || (auth_off + auth_len > iovlen))) {
dprintf("%s: Bad parameters od=%p iov=%p iovlen=%jd "
"auth_off=%d auth_len=%d crypt_off=%d crypt_len=%d "
- "icv_off=%d ivp=%p\n", __func__, od, iov, iovlen,
- auth_off, auth_len, crypt_off, crypt_len, icv_off, ivp);
+ "icv=%p ivp=%p\n", __func__, od, iov, iovlen,
+ auth_off, auth_len, crypt_off, crypt_len, icv, ivp);
return -EINVAL;
}
@@ -1068,16 +1056,12 @@ octo_des_cbc_md5_decrypt(
CVMX_MT_HSH_STARTMD5(tmp1);
/* save the HMAC */
- IOV_INIT(iov, data32, data_i, data_l);
- while (icv_off > 0) {
- IOV_CONSUME(iov, data32, data_i, data_l);
- icv_off -= 4;
- }
+ data32 = (uint32_t *)icv;
CVMX_MF_HSH_IV(tmp1, 0);
*data32 = (uint32_t) (tmp1 >> 32);
- IOV_CONSUME(iov, data32, data_i, data_l);
+ data32++;
*data32 = (uint32_t) tmp1;
- IOV_CONSUME(iov, data32, data_i, data_l);
+ data32++;
CVMX_MF_HSH_IV(tmp1, 1);
*data32 = (uint32_t) (tmp1 >> 32);
@@ -1093,7 +1077,7 @@ octo_des_cbc_sha1_encrypt(
struct iovec *iov, size_t iovcnt, size_t iovlen,
int auth_off, int auth_len,
int crypt_off, int crypt_len,
- int icv_off, uint8_t *ivp)
+ uint8_t *icv, uint8_t *ivp)
{
int next = 0;
union {
@@ -1114,8 +1098,8 @@ octo_des_cbc_sha1_encrypt(
(auth_off & 0x3) || (auth_off + auth_len > iovlen))) {
dprintf("%s: Bad parameters od=%p iov=%p iovlen=%jd "
"auth_off=%d auth_len=%d crypt_off=%d crypt_len=%d "
- "icv_off=%d ivp=%p\n", __func__, od, iov, iovlen,
- auth_off, auth_len, crypt_off, crypt_len, icv_off, ivp);
+ "icv=%p ivp=%p\n", __func__, od, iov, iovlen,
+ auth_off, auth_len, crypt_off, crypt_len, icv, ivp);
return -EINVAL;
}
@@ -1222,16 +1206,12 @@ octo_des_cbc_sha1_encrypt(
CVMX_MT_HSH_STARTSHA((uint64_t) ((64 + 20) << 3));
/* save the HMAC */
- IOV_INIT(iov, data32, data_i, data_l);
- while (icv_off > 0) {
- IOV_CONSUME(iov, data32, data_i, data_l);
- icv_off -= 4;
- }
+ data32 = (uint32_t *)icv;
CVMX_MF_HSH_IV(tmp1, 0);
*data32 = (uint32_t) (tmp1 >> 32);
- IOV_CONSUME(iov, data32, data_i, data_l);
+ data32++;
*data32 = (uint32_t) tmp1;
- IOV_CONSUME(iov, data32, data_i, data_l);
+ data32++;
CVMX_MF_HSH_IV(tmp1, 1);
*data32 = (uint32_t) (tmp1 >> 32);
@@ -1244,7 +1224,7 @@ octo_des_cbc_sha1_decrypt(
struct iovec *iov, size_t iovcnt, size_t iovlen,
int auth_off, int auth_len,
int crypt_off, int crypt_len,
- int icv_off, uint8_t *ivp)
+ uint8_t *icv, uint8_t *ivp)
{
int next = 0;
union {
@@ -1265,8 +1245,8 @@ octo_des_cbc_sha1_decrypt(
(auth_off & 0x3) || (auth_off + auth_len > iovlen))) {
dprintf("%s: Bad parameters od=%p iov=%p iovlen=%jd "
"auth_off=%d auth_len=%d crypt_off=%d crypt_len=%d "
- "icv_off=%d ivp=%p\n", __func__, od, iov, iovlen,
- auth_off, auth_len, crypt_off, crypt_len, icv_off, ivp);
+ "icv=%p ivp=%p\n", __func__, od, iov, iovlen,
+ auth_off, auth_len, crypt_off, crypt_len, icv, ivp);
return -EINVAL;
}
@@ -1372,16 +1352,12 @@ octo_des_cbc_sha1_decrypt(
CVMX_MT_HSH_DATZ(6);
CVMX_MT_HSH_STARTSHA((uint64_t) ((64 + 20) << 3));
/* save the HMAC */
- IOV_INIT(iov, data32, data_i, data_l);
- while (icv_off > 0) {
- IOV_CONSUME(iov, data32, data_i, data_l);
- icv_off -= 4;
- }
+ data32 = (uint32_t *)icv;
CVMX_MF_HSH_IV(tmp1, 0);
*data32 = (uint32_t) (tmp1 >> 32);
- IOV_CONSUME(iov, data32, data_i, data_l);
+ data32++;
*data32 = (uint32_t) tmp1;
- IOV_CONSUME(iov, data32, data_i, data_l);
+ data32++;
CVMX_MF_HSH_IV(tmp1, 1);
*data32 = (uint32_t) (tmp1 >> 32);
@@ -1397,7 +1373,7 @@ octo_aes_cbc_md5_encrypt(
struct iovec *iov, size_t iovcnt, size_t iovlen,
int auth_off, int auth_len,
int crypt_off, int crypt_len,
- int icv_off, uint8_t *ivp)
+ uint8_t *icv, uint8_t *ivp)
{
int next = 0;
union {
@@ -1419,8 +1395,8 @@ octo_aes_cbc_md5_encrypt(
(auth_off & 0x3) || (auth_off + auth_len > iovlen))) {
dprintf("%s: Bad parameters od=%p iov=%p iovlen=%jd "
"auth_off=%d auth_len=%d crypt_off=%d crypt_len=%d "
- "icv_off=%d ivp=%p\n", __func__, od, iov, iovlen,
- auth_off, auth_len, crypt_off, crypt_len, icv_off, ivp);
+ "icv=%p ivp=%p\n", __func__, od, iov, iovlen,
+ auth_off, auth_len, crypt_off, crypt_len, icv, ivp);
return -EINVAL;
}
@@ -1552,16 +1528,12 @@ octo_aes_cbc_md5_encrypt(
CVMX_MT_HSH_STARTMD5(tmp1);
/* save the HMAC */
- IOV_INIT(iov, data32, data_i, data_l);
- while (icv_off > 0) {
- IOV_CONSUME(iov, data32, data_i, data_l);
- icv_off -= 4;
- }
+ data32 = (uint32_t *)icv;
CVMX_MF_HSH_IV(tmp1, 0);
*data32 = (uint32_t) (tmp1 >> 32);
- IOV_CONSUME(iov, data32, data_i, data_l);
+ data32++;
*data32 = (uint32_t) tmp1;
- IOV_CONSUME(iov, data32, data_i, data_l);
+ data32++;
CVMX_MF_HSH_IV(tmp1, 1);
*data32 = (uint32_t) (tmp1 >> 32);
@@ -1574,7 +1546,7 @@ octo_aes_cbc_md5_decrypt(
struct iovec *iov, size_t iovcnt, size_t iovlen,
int auth_off, int auth_len,
int crypt_off, int crypt_len,
- int icv_off, uint8_t *ivp)
+ uint8_t *icv, uint8_t *ivp)
{
int next = 0;
union {
@@ -1596,8 +1568,8 @@ octo_aes_cbc_md5_decrypt(
(auth_off & 0x3) || (auth_off + auth_len > iovlen))) {
dprintf("%s: Bad parameters od=%p iov=%p iovlen=%jd "
"auth_off=%d auth_len=%d crypt_off=%d crypt_len=%d "
- "icv_off=%d ivp=%p\n", __func__, od, iov, iovlen,
- auth_off, auth_len, crypt_off, crypt_len, icv_off, ivp);
+ "icv=%p ivp=%p\n", __func__, od, iov, iovlen,
+ auth_off, auth_len, crypt_off, crypt_len, icv, ivp);
return -EINVAL;
}
@@ -1725,16 +1697,12 @@ octo_aes_cbc_md5_decrypt(
CVMX_MT_HSH_STARTMD5(tmp1);
/* save the HMAC */
- IOV_INIT(iov, data32, data_i, data_l);
- while (icv_off > 0) {
- IOV_CONSUME(iov, data32, data_i, data_l);
- icv_off -= 4;
- }
+ data32 = (uint32_t *)icv;
CVMX_MF_HSH_IV(tmp1, 0);
*data32 = (uint32_t) (tmp1 >> 32);
- IOV_CONSUME(iov, data32, data_i, data_l);
+ data32++;
*data32 = (uint32_t) tmp1;
- IOV_CONSUME(iov, data32, data_i, data_l);
+ data32++;
CVMX_MF_HSH_IV(tmp1, 1);
*data32 = (uint32_t) (tmp1 >> 32);
@@ -1750,7 +1718,7 @@ octo_aes_cbc_sha1_encrypt(
struct iovec *iov, size_t iovcnt, size_t iovlen,
int auth_off, int auth_len,
int crypt_off, int crypt_len,
- int icv_off, uint8_t *ivp)
+ uint8_t *icv, uint8_t *ivp)
{
int next = 0;
union {
@@ -1772,8 +1740,8 @@ octo_aes_cbc_sha1_encrypt(
(auth_off & 0x3) || (auth_off + auth_len > iovlen))) {
dprintf("%s: Bad parameters od=%p iov=%p iovlen=%jd "
"auth_off=%d auth_len=%d crypt_off=%d crypt_len=%d "
- "icv_off=%d ivp=%p\n", __func__, od, iov, iovlen,
- auth_off, auth_len, crypt_off, crypt_len, icv_off, ivp);
+ "icv=%p ivp=%p\n", __func__, od, iov, iovlen,
+ auth_off, auth_len, crypt_off, crypt_len, icv, ivp);
return -EINVAL;
}
@@ -1924,16 +1892,12 @@ octo_aes_cbc_sha1_encrypt(
#endif
/* save the HMAC */
- IOV_INIT(iov, data32, data_i, data_l);
- while (icv_off > 0) {
- IOV_CONSUME(iov, data32, data_i, data_l);
- icv_off -= 4;
- }
+ data32 = (uint32_t *)icv;
CVMX_MF_HSH_IV(tmp1, 0);
*data32 = (uint32_t) (tmp1 >> 32);
- IOV_CONSUME(iov, data32, data_i, data_l);
+ data32++;
*data32 = (uint32_t) tmp1;
- IOV_CONSUME(iov, data32, data_i, data_l);
+ data32++;
CVMX_MF_HSH_IV(tmp1, 1);
*data32 = (uint32_t) (tmp1 >> 32);
@@ -1946,7 +1910,7 @@ octo_aes_cbc_sha1_decrypt(
struct iovec *iov, size_t iovcnt, size_t iovlen,
int auth_off, int auth_len,
int crypt_off, int crypt_len,
- int icv_off, uint8_t *ivp)
+ uint8_t *icv, uint8_t *ivp)
{
int next = 0;
union {
@@ -1968,8 +1932,8 @@ octo_aes_cbc_sha1_decrypt(
(auth_off & 0x3) || (auth_off + auth_len > iovlen))) {
dprintf("%s: Bad parameters od=%p iov=%p iovlen=%jd "
"auth_off=%d auth_len=%d crypt_off=%d crypt_len=%d "
- "icv_off=%d ivp=%p\n", __func__, od, iov, iovlen,
- auth_off, auth_len, crypt_off, crypt_len, icv_off, ivp);
+ "icv=%p ivp=%p\n", __func__, od, iov, iovlen,
+ auth_off, auth_len, crypt_off, crypt_len, icv, ivp);
return -EINVAL;
}
@@ -2119,16 +2083,12 @@ octo_aes_cbc_sha1_decrypt(
#endif
/* save the HMAC */
- IOV_INIT(iov, data32, data_i, data_l);
- while (icv_off > 0) {
- IOV_CONSUME(iov, data32, data_i, data_l);
- icv_off -= 4;
- }
+ data32 = (uint32_t *)icv;
CVMX_MF_HSH_IV(tmp1, 0);
*data32 = (uint32_t) (tmp1 >> 32);
- IOV_CONSUME(iov, data32, data_i, data_l);
+ data32++;
*data32 = (uint32_t) tmp1;
- IOV_CONSUME(iov, data32, data_i, data_l);
+ data32++;
CVMX_MF_HSH_IV(tmp1, 1);
*data32 = (uint32_t) (tmp1 >> 32);
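The recurring cavium_crypto.c change above replaces the `IOV_INIT`/`IOV_CONSUME` walk to `icv_off` with direct stores through a caller-supplied `icv` pointer. The store pattern itself is unchanged: two 64-bit hash registers are split into three 32-bit writes, yielding the 12-byte truncated HMAC. A sketch of that write path (`write_truncated_icv` is an illustrative name; `tmp0`/`tmp1` stand in for the two `CVMX_MF_HSH_IV` reads, and the destination is assumed 32-bit aligned as the driver's casts require):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Store a 96-bit truncated HMAC at icv: the full high word of hash
 * register 0, then only the high half of hash register 1, matching
 * the CVMX_MF_HSH_IV sequence in the driver above.
 */
static void
write_truncated_icv(uint8_t *icv, uint64_t tmp0, uint64_t tmp1)
{
	uint32_t *data32 = (uint32_t *)icv;

	*data32 = (uint32_t)(tmp0 >> 32);	/* bytes 0-3 */
	data32++;
	*data32 = (uint32_t)tmp0;		/* bytes 4-7 */
	data32++;
	*data32 = (uint32_t)(tmp1 >> 32);	/* bytes 8-11 */
}
```

Dropping the iovec walk works because, with the new request layout, the framework hands the driver the ICV location directly rather than an offset to rediscover inside the scatter/gather list.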
diff --git a/sys/mips/cavium/cryptocteon/cryptocteon.c b/sys/mips/cavium/cryptocteon/cryptocteon.c
index d79394054a92..2e1535fd2308 100644
--- a/sys/mips/cavium/cryptocteon/cryptocteon.c
+++ b/sys/mips/cavium/cryptocteon/cryptocteon.c
@@ -59,7 +59,10 @@ static int cryptocteon_probe(device_t);
static int cryptocteon_attach(device_t);
static int cryptocteon_process(device_t, struct cryptop *, int);
-static int cryptocteon_newsession(device_t, crypto_session_t, struct cryptoini *);
+static int cryptocteon_probesession(device_t,
+ const struct crypto_session_params *);
+static int cryptocteon_newsession(device_t, crypto_session_t,
+ const struct crypto_session_params *);
static void
cryptocteon_identify(driver_t *drv, device_t parent)
@@ -89,168 +92,187 @@ cryptocteon_attach(device_t dev)
return (ENXIO);
}
- crypto_register(sc->sc_cid, CRYPTO_MD5_HMAC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_SHA1_HMAC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_DES_CBC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_3DES_CBC, 0, 0);
- crypto_register(sc->sc_cid, CRYPTO_AES_CBC, 0, 0);
-
return (0);
}
-/*
- * Generate a new octo session. We artifically limit it to a single
- * hash/cipher or hash-cipher combo just to make it easier, most callers
- * do not expect more than this anyway.
- */
-static int
-cryptocteon_newsession(device_t dev, crypto_session_t cses,
- struct cryptoini *cri)
+static bool
+cryptocteon_auth_supported(const struct crypto_session_params *csp)
{
- struct cryptoini *c, *encini = NULL, *macini = NULL;
- struct cryptocteon_softc *sc;
- struct octo_sess *ocd;
- int i;
+ u_int hash_len;
- sc = device_get_softc(dev);
+ switch (csp->csp_auth_alg) {
+ case CRYPTO_MD5_HMAC:
+ hash_len = MD5_HASH_LEN;
+ break;
+ case CRYPTO_SHA1_HMAC:
+ hash_len = SHA1_HASH_LEN;
+ break;
+ default:
+ return (false);
+ }
- if (cri == NULL || sc == NULL)
- return (EINVAL);
+ if (csp->csp_auth_klen > hash_len)
+ return (false);
+ return (true);
+}
- /*
- * To keep it simple, we only handle hash, cipher or hash/cipher in a
- * session, you cannot currently do multiple ciphers/hashes in one
- * session even though it would be possibel to code this driver to
- * handle it.
- */
- for (i = 0, c = cri; c && i < 2; i++) {
- if (c->cri_alg == CRYPTO_MD5_HMAC ||
- c->cri_alg == CRYPTO_SHA1_HMAC ||
- c->cri_alg == CRYPTO_NULL_HMAC) {
- if (macini) {
- break;
- }
- macini = c;
- }
- if (c->cri_alg == CRYPTO_DES_CBC ||
- c->cri_alg == CRYPTO_3DES_CBC ||
- c->cri_alg == CRYPTO_AES_CBC ||
- c->cri_alg == CRYPTO_NULL_CBC) {
- if (encini) {
- break;
- }
- encini = c;
- }
- c = c->cri_next;
- }
- if (!macini && !encini) {
- dprintf("%s,%d - EINVAL bad cipher/hash or combination\n",
- __FILE__, __LINE__);
- return EINVAL;
- }
- if (c) {
- dprintf("%s,%d - EINVAL cannot handle chained cipher/hash combos\n",
- __FILE__, __LINE__);
- return EINVAL;
+static bool
+cryptocteon_cipher_supported(const struct crypto_session_params *csp)
+{
+
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_DES_CBC:
+ case CRYPTO_3DES_CBC:
+ if (csp->csp_ivlen != 8)
+ return (false);
+ if (csp->csp_cipher_klen != 8 &&
+ csp->csp_cipher_klen != 24)
+ return (false);
+ break;
+ case CRYPTO_AES_CBC:
+ if (csp->csp_ivlen != 16)
+ return (false);
+ if (csp->csp_cipher_klen != 16 &&
+ csp->csp_cipher_klen != 24 &&
+ csp->csp_cipher_klen != 32)
+ return (false);
+ break;
+ default:
+ return (false);
}
- /*
- * So we have something we can do, lets setup the session
- */
- ocd = crypto_get_driver_session(cses);
+ return (true);
+}
- if (encini && encini->cri_key) {
- ocd->octo_encklen = (encini->cri_klen + 7) / 8;
- memcpy(ocd->octo_enckey, encini->cri_key, ocd->octo_encklen);
- }
+static int
+cryptocteon_probesession(device_t dev, const struct crypto_session_params *csp)
+{
- if (macini && macini->cri_key) {
- ocd->octo_macklen = (macini->cri_klen + 7) / 8;
- memcpy(ocd->octo_mackey, macini->cri_key, ocd->octo_macklen);
+ if (csp->csp_flags != 0)
+ return (EINVAL);
+ switch (csp->csp_mode) {
+ case CSP_MODE_DIGEST:
+ if (!cryptocteon_auth_supported(csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_CIPHER:
+ if (!cryptocteon_cipher_supported(csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_ETA:
+ if (!cryptocteon_auth_supported(csp) ||
+ !cryptocteon_cipher_supported(csp))
+ return (EINVAL);
+ break;
+ default:
+ return (EINVAL);
}
+ return (CRYPTODEV_PROBE_ACCEL_SOFTWARE);
+}
- ocd->octo_mlen = 0;
- if (encini && encini->cri_mlen)
- ocd->octo_mlen = encini->cri_mlen;
- else if (macini && macini->cri_mlen)
- ocd->octo_mlen = macini->cri_mlen;
- else
- ocd->octo_mlen = 12;
+static void
+cryptocteon_calc_hash(const struct crypto_session_params *csp, const char *key,
+ struct octo_sess *ocd)
+{
+ char hash_key[SHA1_HASH_LEN];
- /*
- * point c at the enc if it exists, otherwise the mac
- */
- c = encini ? encini : macini;
+ memset(hash_key, 0, sizeof(hash_key));
+ memcpy(hash_key, key, csp->csp_auth_klen);
+ octo_calc_hash(csp->csp_auth_alg == CRYPTO_SHA1_HMAC, hash_key,
+ ocd->octo_hminner, ocd->octo_hmouter);
+}
- switch (c->cri_alg) {
- case CRYPTO_DES_CBC:
- case CRYPTO_3DES_CBC:
- ocd->octo_ivsize = 8;
- switch (macini ? macini->cri_alg : -1) {
+/* Generate a new octo session. */
+static int
+cryptocteon_newsession(device_t dev, crypto_session_t cses,
+ const struct crypto_session_params *csp)
+{
+ struct cryptocteon_softc *sc;
+ struct octo_sess *ocd;
+
+ sc = device_get_softc(dev);
+
+ ocd = crypto_get_driver_session(cses);
+
+ ocd->octo_encklen = csp->csp_cipher_klen;
+ if (csp->csp_cipher_key != NULL)
+ memcpy(ocd->octo_enckey, csp->csp_cipher_key,
+ ocd->octo_encklen);
+
+ if (csp->csp_auth_key != NULL)
+ cryptocteon_calc_hash(csp, csp->csp_auth_key, ocd);
+
+ ocd->octo_mlen = csp->csp_auth_mlen;
+ if (csp->csp_auth_mlen == 0) {
+ switch (csp->csp_auth_alg) {
case CRYPTO_MD5_HMAC:
- ocd->octo_encrypt = octo_des_cbc_md5_encrypt;
- ocd->octo_decrypt = octo_des_cbc_md5_decrypt;
- octo_calc_hash(0, macini->cri_key, ocd->octo_hminner,
- ocd->octo_hmouter);
+ ocd->octo_mlen = MD5_HASH_LEN;
break;
case CRYPTO_SHA1_HMAC:
- ocd->octo_encrypt = octo_des_cbc_sha1_encrypt;
- ocd->octo_decrypt = octo_des_cbc_sha1_encrypt;
- octo_calc_hash(1, macini->cri_key, ocd->octo_hminner,
- ocd->octo_hmouter);
- break;
- case -1:
- ocd->octo_encrypt = octo_des_cbc_encrypt;
- ocd->octo_decrypt = octo_des_cbc_decrypt;
+ ocd->octo_mlen = SHA1_HASH_LEN;
break;
- default:
- dprintf("%s,%d: EINVALn", __FILE__, __LINE__);
- return EINVAL;
}
- break;
- case CRYPTO_AES_CBC:
- ocd->octo_ivsize = 16;
- switch (macini ? macini->cri_alg : -1) {
+ }
+
+ switch (csp->csp_mode) {
+ case CSP_MODE_DIGEST:
+ switch (csp->csp_auth_alg) {
case CRYPTO_MD5_HMAC:
- ocd->octo_encrypt = octo_aes_cbc_md5_encrypt;
- ocd->octo_decrypt = octo_aes_cbc_md5_decrypt;
- octo_calc_hash(0, macini->cri_key, ocd->octo_hminner,
- ocd->octo_hmouter);
+ ocd->octo_encrypt = octo_null_md5_encrypt;
+ ocd->octo_decrypt = octo_null_md5_encrypt;
break;
case CRYPTO_SHA1_HMAC:
- ocd->octo_encrypt = octo_aes_cbc_sha1_encrypt;
- ocd->octo_decrypt = octo_aes_cbc_sha1_decrypt;
- octo_calc_hash(1, macini->cri_key, ocd->octo_hminner,
- ocd->octo_hmouter);
+ ocd->octo_encrypt = octo_null_sha1_encrypt;
+ ocd->octo_decrypt = octo_null_sha1_encrypt;
+ break;
+ }
+ break;
+ case CSP_MODE_CIPHER:
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_DES_CBC:
+ case CRYPTO_3DES_CBC:
+ ocd->octo_encrypt = octo_des_cbc_encrypt;
+ ocd->octo_decrypt = octo_des_cbc_decrypt;
break;
- case -1:
+ case CRYPTO_AES_CBC:
ocd->octo_encrypt = octo_aes_cbc_encrypt;
ocd->octo_decrypt = octo_aes_cbc_decrypt;
break;
- default:
- dprintf("%s,%d: EINVAL\n", __FILE__, __LINE__);
- return EINVAL;
}
break;
- case CRYPTO_MD5_HMAC:
- ocd->octo_encrypt = octo_null_md5_encrypt;
- ocd->octo_decrypt = octo_null_md5_encrypt;
- octo_calc_hash(0, macini->cri_key, ocd->octo_hminner,
- ocd->octo_hmouter);
- break;
- case CRYPTO_SHA1_HMAC:
- ocd->octo_encrypt = octo_null_sha1_encrypt;
- ocd->octo_decrypt = octo_null_sha1_encrypt;
- octo_calc_hash(1, macini->cri_key, ocd->octo_hminner,
- ocd->octo_hmouter);
+ case CSP_MODE_ETA:
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_DES_CBC:
+ case CRYPTO_3DES_CBC:
+ switch (csp->csp_auth_alg) {
+ case CRYPTO_MD5_HMAC:
+ ocd->octo_encrypt = octo_des_cbc_md5_encrypt;
+ ocd->octo_decrypt = octo_des_cbc_md5_decrypt;
+ break;
+ case CRYPTO_SHA1_HMAC:
+ ocd->octo_encrypt = octo_des_cbc_sha1_encrypt;
+ ocd->octo_decrypt = octo_des_cbc_sha1_decrypt;
+ break;
+ }
+ break;
+ case CRYPTO_AES_CBC:
+ switch (csp->csp_auth_alg) {
+ case CRYPTO_MD5_HMAC:
+ ocd->octo_encrypt = octo_aes_cbc_md5_encrypt;
+ ocd->octo_decrypt = octo_aes_cbc_md5_decrypt;
+ break;
+ case CRYPTO_SHA1_HMAC:
+ ocd->octo_encrypt = octo_aes_cbc_sha1_encrypt;
+ ocd->octo_decrypt = octo_aes_cbc_sha1_decrypt;
+ break;
+ }
+ break;
+ }
break;
- default:
- dprintf("%s,%d: EINVAL\n", __FILE__, __LINE__);
- return EINVAL;
}
- ocd->octo_encalg = encini ? encini->cri_alg : -1;
- ocd->octo_macalg = macini ? macini->cri_alg : -1;
+ KASSERT(ocd->octo_encrypt != NULL && ocd->octo_decrypt != NULL,
+ ("%s: missing function pointers", __func__));
return (0);
}
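The rewritten `cryptocteon_newsession()` above replaces the old walk over linked `cryptoini` descriptors with a dispatch keyed on `(csp_mode, csp_cipher_alg, csp_auth_alg)`. A standalone sketch of that dispatch pattern, with illustrative enum and function names (not the kernel's actual types or handlers):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the OCF session-parameter fields. */
enum mode { MODE_DIGEST, MODE_CIPHER, MODE_ETA };
enum alg  { ALG_NONE, ALG_AES_CBC, ALG_SHA1_HMAC };

typedef int (*handler)(void);
static int aes_cbc(void)      { return (1); }
static int sha1_hmac(void)    { return (2); }
static int aes_cbc_sha1(void) { return (3); }

/*
 * Select exactly one handler per (mode, cipher, auth) tuple, the way
 * newsession picks octo_encrypt/octo_decrypt; NULL means the combination
 * is unsupported (probesession would have rejected it already).
 */
static handler
pick(enum mode mode, enum alg cipher, enum alg auth)
{
	switch (mode) {
	case MODE_CIPHER:
		return (cipher == ALG_AES_CBC ? aes_cbc : NULL);
	case MODE_DIGEST:
		return (auth == ALG_SHA1_HMAC ? sha1_hmac : NULL);
	case MODE_ETA:
		return (cipher == ALG_AES_CBC && auth == ALG_SHA1_HMAC ?
		    aes_cbc_sha1 : NULL);
	}
	return (NULL);
}
```

The benefit over the old scheme is that no code has to infer which descriptor carries the cipher and which the MAC; the mode states it up front.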
@@ -261,106 +283,107 @@ cryptocteon_newsession(device_t dev, crypto_session_t cses,
static int
cryptocteon_process(device_t dev, struct cryptop *crp, int hint)
{
- struct cryptodesc *crd;
+ const struct crypto_session_params *csp;
struct octo_sess *od;
size_t iovcnt, iovlen;
struct mbuf *m = NULL;
struct uio *uiop = NULL;
- struct cryptodesc *enccrd = NULL, *maccrd = NULL;
unsigned char *ivp = NULL;
- unsigned char iv_data[HASH_MAX_LEN];
- int auth_off = 0, auth_len = 0, crypt_off = 0, crypt_len = 0, icv_off = 0;
+ unsigned char iv_data[16];
+ unsigned char icv[SHA1_HASH_LEN], icv2[SHA1_HASH_LEN];
+ int auth_off, auth_len, crypt_off, crypt_len;
struct cryptocteon_softc *sc;
sc = device_get_softc(dev);
- if (sc == NULL || crp == NULL)
- return EINVAL;
-
crp->crp_etype = 0;
- if (crp->crp_desc == NULL || crp->crp_buf == NULL) {
- dprintf("%s,%d: EINVAL\n", __FILE__, __LINE__);
- crp->crp_etype = EINVAL;
+ od = crypto_get_driver_session(crp->crp_session);
+ csp = crypto_get_params(crp->crp_session);
+
+ /*
+ * The crypto routines assume that the regions to auth and
+ * cipher are exactly 8 byte multiples and aligned on 8
+ * byte logical boundaries within the iovecs.
+ */
+ if (crp->crp_aad_length % 8 != 0 || crp->crp_payload_length % 8 != 0) {
+ crp->crp_etype = EFBIG;
+ goto done;
+ }
+
+ /*
+ * As currently written, the crypto routines assume the AAD and
+ * payload are adjacent.
+ */
+ if (crp->crp_aad_length != 0 && crp->crp_payload_start !=
+ crp->crp_aad_start + crp->crp_aad_length) {
+ crp->crp_etype = EFBIG;
goto done;
}
- od = crypto_get_driver_session(crp->crp_session);
+ crypt_off = crp->crp_payload_start;
+ crypt_len = crp->crp_payload_length;
+ if (crp->crp_aad_length != 0) {
+ auth_off = crp->crp_aad_start;
+ auth_len = crp->crp_aad_length + crp->crp_payload_length;
+ } else {
+ auth_off = crypt_off;
+ auth_len = crypt_len;
+ }
/*
* do some error checking outside of the loop for m and IOV processing
* this leaves us with valid m or uiop pointers for later
*/
- if (crp->crp_flags & CRYPTO_F_IMBUF) {
+ switch (crp->crp_buf_type) {
+ case CRYPTO_BUF_MBUF:
+ {
unsigned frags;
- m = (struct mbuf *) crp->crp_buf;
+ m = crp->crp_mbuf;
for (frags = 0; m != NULL; frags++)
m = m->m_next;
if (frags >= UIO_MAXIOV) {
printf("%s,%d: %d frags > UIO_MAXIOV", __FILE__, __LINE__, frags);
+ crp->crp_etype = EFBIG;
goto done;
}
- m = (struct mbuf *) crp->crp_buf;
- } else if (crp->crp_flags & CRYPTO_F_IOV) {
- uiop = (struct uio *) crp->crp_buf;
+ m = crp->crp_mbuf;
+ break;
+ }
+ case CRYPTO_BUF_UIO:
+ uiop = crp->crp_uio;
if (uiop->uio_iovcnt > UIO_MAXIOV) {
printf("%s,%d: %d uio_iovcnt > UIO_MAXIOV", __FILE__, __LINE__,
uiop->uio_iovcnt);
+ crp->crp_etype = EFBIG;
goto done;
}
+ break;
}
- /* point our enccrd and maccrd appropriately */
- crd = crp->crp_desc;
- if (crd->crd_alg == od->octo_encalg)
- enccrd = crd;
- if (crd->crd_alg == od->octo_macalg)
- maccrd = crd;
- crd = crd->crd_next;
- if (crd) {
- if (crd->crd_alg == od->octo_encalg)
- enccrd = crd;
- if (crd->crd_alg == od->octo_macalg)
- maccrd = crd;
- crd = crd->crd_next;
- }
- if (crd) {
- crp->crp_etype = EINVAL;
- dprintf("%s,%d: ENOENT - descriptors do not match session\n",
- __FILE__, __LINE__);
- goto done;
- }
-
- if (enccrd) {
- if (enccrd->crd_flags & CRD_F_IV_EXPLICIT) {
- ivp = enccrd->crd_iv;
- } else {
+ if (csp->csp_cipher_alg != 0) {
+ if (crp->crp_flags & CRYPTO_F_IV_GENERATE) {
+ arc4rand(iv_data, csp->csp_ivlen, 0);
+ crypto_copyback(crp, crp->crp_iv_start, csp->csp_ivlen,
+ iv_data);
+ ivp = iv_data;
+ } else if (crp->crp_flags & CRYPTO_F_IV_SEPARATE)
+ ivp = crp->crp_iv;
+ else {
+ crypto_copydata(crp, crp->crp_iv_start, csp->csp_ivlen,
+ iv_data);
ivp = iv_data;
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- enccrd->crd_inject, od->octo_ivsize, (caddr_t) ivp);
- }
-
- if (maccrd) {
- auth_off = maccrd->crd_skip;
- auth_len = maccrd->crd_len;
- icv_off = maccrd->crd_inject;
}
-
- crypt_off = enccrd->crd_skip;
- crypt_len = enccrd->crd_len;
- } else { /* if (maccrd) */
- auth_off = maccrd->crd_skip;
- auth_len = maccrd->crd_len;
- icv_off = maccrd->crd_inject;
}
/*
* setup the I/O vector to cover the buffer
*/
- if (crp->crp_flags & CRYPTO_F_IMBUF) {
+ switch (crp->crp_buf_type) {
+ case CRYPTO_BUF_MBUF:
iovcnt = 0;
iovlen = 0;
@@ -371,7 +394,8 @@ cryptocteon_process(device_t dev, struct cryptop *crp, int hint)
m = m->m_next;
iovlen += od->octo_iov[iovcnt++].iov_len;
}
- } else if (crp->crp_flags & CRYPTO_F_IOV) {
+ break;
+ case CRYPTO_BUF_UIO:
iovlen = 0;
for (iovcnt = 0; iovcnt < uiop->uio_iovcnt; iovcnt++) {
od->octo_iov[iovcnt].iov_base = uiop->uio_iov[iovcnt].iov_base;
@@ -379,44 +403,44 @@ cryptocteon_process(device_t dev, struct cryptop *crp, int hint)
iovlen += od->octo_iov[iovcnt].iov_len;
}
- } else {
+ break;
+ case CRYPTO_BUF_CONTIG:
iovlen = crp->crp_ilen;
od->octo_iov[0].iov_base = crp->crp_buf;
od->octo_iov[0].iov_len = crp->crp_ilen;
iovcnt = 1;
+ break;
+ default:
+ panic("can't happen");
}
/*
* setup a new explicit key
*/
- if (enccrd) {
- if (enccrd->crd_flags & CRD_F_KEY_EXPLICIT) {
- od->octo_encklen = (enccrd->crd_klen + 7) / 8;
- memcpy(od->octo_enckey, enccrd->crd_key, od->octo_encklen);
- }
- }
- if (maccrd) {
- if (maccrd->crd_flags & CRD_F_KEY_EXPLICIT) {
- od->octo_macklen = (maccrd->crd_klen + 7) / 8;
- memcpy(od->octo_mackey, maccrd->crd_key, od->octo_macklen);
- od->octo_mackey_set = 0;
- }
- if (!od->octo_mackey_set) {
- octo_calc_hash(maccrd->crd_alg == CRYPTO_MD5_HMAC ? 0 : 1,
- maccrd->crd_key, od->octo_hminner, od->octo_hmouter);
- od->octo_mackey_set = 1;
- }
- }
+ if (crp->crp_cipher_key != NULL)
+ memcpy(od->octo_enckey, crp->crp_cipher_key, od->octo_encklen);
+ if (crp->crp_auth_key != NULL)
+ cryptocteon_calc_hash(csp, crp->crp_auth_key, od);
- if (!enccrd || (enccrd->crd_flags & CRD_F_ENCRYPT))
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
(*od->octo_encrypt)(od, od->octo_iov, iovcnt, iovlen,
- auth_off, auth_len, crypt_off, crypt_len, icv_off, ivp);
+ auth_off, auth_len, crypt_off, crypt_len, icv, ivp);
else
(*od->octo_decrypt)(od, od->octo_iov, iovcnt, iovlen,
- auth_off, auth_len, crypt_off, crypt_len, icv_off, ivp);
-
+ auth_off, auth_len, crypt_off, crypt_len, icv, ivp);
+
+ if (csp->csp_auth_alg != 0) {
+ if (crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) {
+ crypto_copydata(crp, crp->crp_digest_start,
+ od->octo_mlen, icv2);
+ if (timingsafe_bcmp(icv, icv2, od->octo_mlen) != 0)
+ crp->crp_etype = EBADMSG;
+ } else
+ crypto_copyback(crp, crp->crp_digest_start,
+ od->octo_mlen, icv);
+ }
done:
crypto_done(crp);
return (0);
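The digest path added above now compares the engine-computed ICV against the one stored in the request with a constant-time comparison instead of injecting it at an `icv_off`. A minimal sketch of that verify-or-copy-back pattern; `ct_bcmp` and `check_digest` are illustrative stand-ins, not the kernel's `timingsafe_bcmp(9)` or driver code:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Constant-time comparison in the style of timingsafe_bcmp(9):
 * accumulate differences rather than returning at the first mismatch,
 * so the running time does not leak where the digests diverge.
 */
static int
ct_bcmp(const void *a, const void *b, size_t n)
{
	const unsigned char *pa = a, *pb = b;
	unsigned char acc = 0;
	size_t i;

	for (i = 0; i < n; i++)
		acc |= pa[i] ^ pb[i];
	return (acc != 0);
}

/*
 * CRYPTO_OP_VERIFY_DIGEST path: compare the computed ICV with the
 * expected one; on mismatch the driver sets crp_etype = EBADMSG
 * (represented here by a -1 return).
 */
static int
check_digest(const unsigned char *computed, const unsigned char *expected,
    size_t mlen)
{
	return (ct_bcmp(computed, expected, mlen) ? -1 : 0);
}
```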
@@ -429,6 +453,7 @@ static device_method_t cryptocteon_methods[] = {
DEVMETHOD(device_attach, cryptocteon_attach),
/* crypto device methods */
+ DEVMETHOD(cryptodev_probesession, cryptocteon_probesession),
DEVMETHOD(cryptodev_newsession, cryptocteon_newsession),
DEVMETHOD(cryptodev_process, cryptocteon_process),
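The new `cryptodev_probesession` method registered above returns a negative value on success, like `device_probe(9)`, and the framework keeps the driver with the best (least negative) result. A sketch of that selection, using placeholder preference values rather than the real `CRYPTODEV_PROBE_*` constants from `<opencrypto/cryptodev.h>`:

```c
#include <assert.h>
#include <stddef.h>

/* Placeholder preference levels; real constants differ. */
#define PROBE_HARDWARE	(-10)	/* e.g. ccr */
#define PROBE_ACCEL_SW	(-25)	/* e.g. aesni */
#define PROBE_SOFTWARE	(-1000)	/* cryptosoft */
#define PROBE_FAIL	1	/* positive: driver rejects the session */

/*
 * Pick the index of the best candidate: among non-positive probe
 * results, the largest (least negative) value wins; -1 means no
 * driver supports the requested session parameters.
 */
static int
pick_best(const int *probes, size_t n)
{
	int best = 1, i_best = -1;
	size_t i;

	for (i = 0; i < n; i++) {
		if (probes[i] > 0)
			continue;	/* rejected */
		if (i_best == -1 || probes[i] > best) {
			best = probes[i];
			i_best = (int)i;
		}
	}
	return (i_best);
}
```

This is why, as the commit message notes, a request restricted to hardware no longer falls back to accelerated software: those drivers simply probe at a different preference level.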
diff --git a/sys/mips/cavium/cryptocteon/cryptocteonvar.h b/sys/mips/cavium/cryptocteon/cryptocteonvar.h
index e722298d899e..e7bc445deefb 100644
--- a/sys/mips/cavium/cryptocteon/cryptocteonvar.h
+++ b/sys/mips/cavium/cryptocteon/cryptocteonvar.h
@@ -34,23 +34,15 @@
struct octo_sess;
-typedef int octo_encrypt_t(struct octo_sess *od, struct iovec *iov, size_t iovcnt, size_t iovlen, int auth_off, int auth_len, int crypt_off, int crypt_len, int icv_off, uint8_t *ivp);
-typedef int octo_decrypt_t(struct octo_sess *od, struct iovec *iov, size_t iovcnt, size_t iovlen, int auth_off, int auth_len, int crypt_off, int crypt_len, int icv_off, uint8_t *ivp);
+typedef int octo_encrypt_t(struct octo_sess *od, struct iovec *iov, size_t iovcnt, size_t iovlen, int auth_off, int auth_len, int crypt_off, int crypt_len, uint8_t *icv, uint8_t *ivp);
+typedef int octo_decrypt_t(struct octo_sess *od, struct iovec *iov, size_t iovcnt, size_t iovlen, int auth_off, int auth_len, int crypt_off, int crypt_len, uint8_t *icv, uint8_t *ivp);
struct octo_sess {
- int octo_encalg;
#define MAX_CIPHER_KEYLEN 64
char octo_enckey[MAX_CIPHER_KEYLEN];
int octo_encklen;
- int octo_macalg;
- #define MAX_HASH_KEYLEN 64
- char octo_mackey[MAX_HASH_KEYLEN];
- int octo_macklen;
- int octo_mackey_set;
-
int octo_mlen;
- int octo_ivsize;
octo_encrypt_t *octo_encrypt;
octo_decrypt_t *octo_decrypt;
diff --git a/sys/mips/nlm/dev/sec/nlmrsa.c b/sys/mips/nlm/dev/sec/nlmrsa.c
index e0aab68d8f5a..3252ecbed9c9 100644
--- a/sys/mips/nlm/dev/sec/nlmrsa.c
+++ b/sys/mips/nlm/dev/sec/nlmrsa.c
@@ -76,7 +76,6 @@ static void print_krp_params(struct cryptkop *krp);
#endif
static int xlp_rsa_init(struct xlp_rsa_softc *sc, int node);
-static int xlp_rsa_newsession(device_t , crypto_session_t, struct cryptoini *);
static int xlp_rsa_kprocess(device_t , struct cryptkop *, int);
static int xlp_get_rsa_opsize(struct xlp_rsa_command *cmd, unsigned int bits);
static void xlp_free_cmd_params(struct xlp_rsa_command *cmd);
@@ -98,7 +97,6 @@ static device_method_t xlp_rsa_methods[] = {
DEVMETHOD(bus_driver_added, bus_generic_driver_added),
/* crypto device methods */
- DEVMETHOD(cryptodev_newsession, xlp_rsa_newsession),
DEVMETHOD(cryptodev_kprocess, xlp_rsa_kprocess),
DEVMETHOD_END
@@ -314,20 +312,6 @@ xlp_rsa_detach(device_t dev)
}
/*
- * Allocate a new 'session' (unused).
- */
-static int
-xlp_rsa_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
-{
- struct xlp_rsa_softc *sc = device_get_softc(dev);
-
- if (cri == NULL || sc == NULL)
- return (EINVAL);
-
- return (0);
-}
-
-/*
* XXX freesession should run a zero'd mac/encrypt key into context ram.
* XXX to blow away any keys already stored there.
*/
diff --git a/sys/mips/nlm/dev/sec/nlmsec.c b/sys/mips/nlm/dev/sec/nlmsec.c
index 4dd1ad3daffa..092011916c8b 100644
--- a/sys/mips/nlm/dev/sec/nlmsec.c
+++ b/sys/mips/nlm/dev/sec/nlmsec.c
@@ -52,6 +52,7 @@ __FBSDID("$FreeBSD$");
#include <dev/pci/pcivar.h>
#include <opencrypto/cryptodev.h>
+#include <opencrypto/xform_auth.h>
#include "cryptodev_if.h"
@@ -71,13 +72,14 @@ __FBSDID("$FreeBSD$");
unsigned int creditleft;
-void xlp_sec_print_data(struct cryptop *crp);
-
static int xlp_sec_init(struct xlp_sec_softc *sc);
-static int xlp_sec_newsession(device_t , crypto_session_t, struct cryptoini *);
+static int xlp_sec_probesession(device_t,
+ const struct crypto_session_params *);
+static int xlp_sec_newsession(device_t , crypto_session_t,
+ const struct crypto_session_params *);
static int xlp_sec_process(device_t , struct cryptop *, int);
-static int xlp_copyiv(struct xlp_sec_softc *, struct xlp_sec_command *,
- struct cryptodesc *enccrd);
+static void xlp_copyiv(struct xlp_sec_softc *, struct xlp_sec_command *,
+ const struct crypto_session_params *);
static int xlp_get_nsegs(struct cryptop *, unsigned int *);
static int xlp_alloc_cmd_params(struct xlp_sec_command *, unsigned int);
static void xlp_free_cmd_params(struct xlp_sec_command *);
@@ -97,6 +99,7 @@ static device_method_t xlp_sec_methods[] = {
DEVMETHOD(bus_driver_added, bus_generic_driver_added),
/* crypto device methods */
+ DEVMETHOD(cryptodev_probesession, xlp_sec_probesession),
DEVMETHOD(cryptodev_newsession, xlp_sec_newsession),
DEVMETHOD(cryptodev_process, xlp_sec_process),
@@ -198,46 +201,6 @@ print_crypto_params(struct xlp_sec_command *cmd, struct nlm_fmn_msg m)
}
void
-xlp_sec_print_data(struct cryptop *crp)
-{
- int i, key_len;
- struct cryptodesc *crp_desc;
-
- printf("session = %p, crp_ilen = %d, crp_olen=%d \n", crp->crp_session,
- crp->crp_ilen, crp->crp_olen);
-
- printf("crp_flags = 0x%x\n", crp->crp_flags);
-
- printf("crp buf:\n");
- for (i = 0; i < crp->crp_ilen; i++) {
- printf("%c ", crp->crp_buf[i]);
- if (i % 10 == 0)
- printf("\n");
- }
-
- printf("\n");
- printf("****************** desc ****************\n");
- crp_desc = crp->crp_desc;
- printf("crd_skip=%d, crd_len=%d, crd_flags=0x%x, crd_alg=%d\n",
- crp_desc->crd_skip, crp_desc->crd_len, crp_desc->crd_flags,
- crp_desc->crd_alg);
-
- key_len = crp_desc->crd_klen / 8;
- printf("key(%d) :\n", key_len);
- for (i = 0; i < key_len; i++)
- printf("%d", crp_desc->crd_key[i]);
- printf("\n");
-
- printf(" IV : \n");
- for (i = 0; i < EALG_MAX_BLOCK_LEN; i++)
- printf("%d", crp_desc->crd_iv[i]);
- printf("\n");
-
- printf("crd_next=%p\n", crp_desc->crd_next);
- return;
-}
-
-void
print_cmd(struct xlp_sec_command *cmd)
{
printf("session_num :%d\n",cmd->session_num);
@@ -289,8 +252,7 @@ nlm_xlpsec_msgring_handler(int vc, int size, int code, int src_id,
{
struct xlp_sec_command *cmd = NULL;
struct xlp_sec_softc *sc = NULL;
- struct cryptodesc *crd = NULL;
- unsigned int ivlen = 0;
+ uint8_t hash[HASH_MAX_LEN];
KASSERT(code == FMN_SWCODE_CRYPTO,
("%s: bad code = %d, expected code = %d\n", __FUNCTION__,
@@ -310,23 +272,6 @@ nlm_xlpsec_msgring_handler(int vc, int size, int code, int src_id,
(unsigned long long)msg->msg[0], (unsigned long long)msg->msg[1],
(int)CRYPTO_ERROR(msg->msg[1])));
- crd = cmd->enccrd;
- /* Copy the last 8 or 16 bytes to the session iv, so that in few
- * cases this will be used as IV for the next request
- */
- if (crd != NULL) {
- if ((crd->crd_alg == CRYPTO_DES_CBC ||
- crd->crd_alg == CRYPTO_3DES_CBC ||
- crd->crd_alg == CRYPTO_AES_CBC) &&
- (crd->crd_flags & CRD_F_ENCRYPT)) {
- ivlen = ((crd->crd_alg == CRYPTO_AES_CBC) ?
- XLP_SEC_AES_IV_LENGTH : XLP_SEC_DES_IV_LENGTH);
- crypto_copydata(cmd->crp->crp_flags, cmd->crp->crp_buf,
- crd->crd_skip + crd->crd_len - ivlen, ivlen,
- cmd->ses->ses_iv);
- }
- }
-
/* If there are not enough credits to send, then send request
* will fail with ERESTART and the driver will be blocked until it is
* unblocked here after knowing that there are sufficient credits to
@@ -339,10 +284,16 @@ nlm_xlpsec_msgring_handler(int vc, int size, int code, int src_id,
sc->sc_needwakeup &= (~(CRYPTO_SYMQ | CRYPTO_ASYMQ));
}
}
- if(cmd->maccrd) {
- crypto_copyback(cmd->crp->crp_flags,
- cmd->crp->crp_buf, cmd->maccrd->crd_inject,
- cmd->hash_dst_len, cmd->hashdest);
+ if (cmd->hash_dst_len != 0) {
+ if (cmd->crp->crp_op & CRYPTO_OP_VERIFY_DIGEST) {
+ crypto_copydata(cmd->crp, cmd->crp->crp_digest_start,
+ cmd->hash_dst_len, hash);
+ if (timingsafe_bcmp(cmd->hashdest, hash,
+ cmd->hash_dst_len) != 0)
+ cmd->crp->crp_etype = EBADMSG;
+ } else
+ crypto_copyback(cmd->crp, cmd->crp->crp_digest_start,
+ cmd->hash_dst_len, cmd->hashdest);
}
/* This indicates completion of the crypto operation */
@@ -392,29 +343,6 @@ xlp_sec_attach(device_t dev)
" id\n");
goto error_exit;
}
- if (crypto_register(sc->sc_cid, CRYPTO_DES_CBC, 0, 0) != 0)
- printf("register failed for CRYPTO_DES_CBC\n");
-
- if (crypto_register(sc->sc_cid, CRYPTO_3DES_CBC, 0, 0) != 0)
- printf("register failed for CRYPTO_3DES_CBC\n");
-
- if (crypto_register(sc->sc_cid, CRYPTO_AES_CBC, 0, 0) != 0)
- printf("register failed for CRYPTO_AES_CBC\n");
-
- if (crypto_register(sc->sc_cid, CRYPTO_ARC4, 0, 0) != 0)
- printf("register failed for CRYPTO_ARC4\n");
-
- if (crypto_register(sc->sc_cid, CRYPTO_MD5, 0, 0) != 0)
- printf("register failed for CRYPTO_MD5\n");
-
- if (crypto_register(sc->sc_cid, CRYPTO_SHA1, 0, 0) != 0)
- printf("register failed for CRYPTO_SHA1\n");
-
- if (crypto_register(sc->sc_cid, CRYPTO_MD5_HMAC, 0, 0) != 0)
- printf("register failed for CRYPTO_MD5_HMAC\n");
-
- if (crypto_register(sc->sc_cid, CRYPTO_SHA1_HMAC, 0, 0) != 0)
- printf("register failed for CRYPTO_SHA1_HMAC\n");
base = nlm_get_sec_pcibase(node);
qstart = nlm_qidstart(base);
@@ -443,65 +371,88 @@ xlp_sec_detach(device_t dev)
return (0);
}
+static bool
+xlp_sec_auth_supported(const struct crypto_session_params *csp)
+{
+
+ switch (csp->csp_auth_alg) {
+ case CRYPTO_MD5:
+ case CRYPTO_SHA1:
+ case CRYPTO_MD5_HMAC:
+ case CRYPTO_SHA1_HMAC:
+ break;
+ default:
+ return (false);
+ }
+ return (true);
+}
+
+static bool
+xlp_sec_cipher_supported(const struct crypto_session_params *csp)
+{
+
+ switch (csp->csp_cipher_alg) {
+ case CRYPTO_DES_CBC:
+ case CRYPTO_3DES_CBC:
+ if (csp->csp_ivlen != XLP_SEC_DES_IV_LENGTH)
+ return (false);
+ break;
+ case CRYPTO_AES_CBC:
+ if (csp->csp_ivlen != XLP_SEC_AES_IV_LENGTH)
+ return (false);
+ break;
+ case CRYPTO_ARC4:
+ if (csp->csp_ivlen != XLP_SEC_ARC4_IV_LENGTH)
+ return (false);
+ break;
+ default:
+ return (false);
+ }
+
+ return (true);
+}
+
static int
-xlp_sec_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
+xlp_sec_probesession(device_t dev, const struct crypto_session_params *csp)
{
- struct cryptoini *c;
- struct xlp_sec_softc *sc = device_get_softc(dev);
- int mac = 0, cry = 0;
- struct xlp_sec_session *ses;
- struct xlp_sec_command *cmd = NULL;
- if (cri == NULL || sc == NULL)
+ if (csp->csp_flags != 0)
return (EINVAL);
+ switch (csp->csp_mode) {
+ case CSP_MODE_DIGEST:
+ if (!xlp_sec_auth_supported(csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_CIPHER:
+ if (!xlp_sec_cipher_supported(csp))
+ return (EINVAL);
+ break;
+ case CSP_MODE_ETA:
+ if (!xlp_sec_auth_supported(csp) ||
+ !xlp_sec_cipher_supported(csp))
+ return (EINVAL);
+ break;
+ default:
+ return (EINVAL);
+ }
+ return (CRYPTODEV_PROBE_HARDWARE);
+}
+
+static int
+xlp_sec_newsession(device_t dev, crypto_session_t cses,
+ const struct crypto_session_params *csp)
+{
+ struct xlp_sec_session *ses;
ses = crypto_get_driver_session(cses);
- cmd = &ses->cmd;
-
- for (c = cri; c != NULL; c = c->cri_next) {
- switch (c->cri_alg) {
- case CRYPTO_MD5:
- case CRYPTO_SHA1:
- case CRYPTO_MD5_HMAC:
- case CRYPTO_SHA1_HMAC:
- if (mac)
- return (EINVAL);
- mac = 1;
- ses->hs_mlen = c->cri_mlen;
- if (ses->hs_mlen == 0) {
- switch (c->cri_alg) {
- case CRYPTO_MD5:
- case CRYPTO_MD5_HMAC:
- ses->hs_mlen = 16;
- break;
- case CRYPTO_SHA1:
- case CRYPTO_SHA1_HMAC:
- ses->hs_mlen = 20;
- break;
- }
- }
- break;
- case CRYPTO_DES_CBC:
- case CRYPTO_3DES_CBC:
- case CRYPTO_AES_CBC:
- /* XXX this may read fewer, does it matter? */
- read_random(ses->ses_iv, c->cri_alg ==
- CRYPTO_AES_CBC ? XLP_SEC_AES_IV_LENGTH :
- XLP_SEC_DES_IV_LENGTH);
- /* FALLTHROUGH */
- case CRYPTO_ARC4:
- if (cry)
- return (EINVAL);
- cry = 1;
- break;
- default:
- return (EINVAL);
- }
+
+ if (csp->csp_auth_alg != 0) {
+ if (csp->csp_auth_mlen == 0)
+ ses->hs_mlen = crypto_auth_hash(csp)->hashsize;
+ else
+ ses->hs_mlen = csp->csp_auth_mlen;
}
- if (mac == 0 && cry == 0)
- return (EINVAL);
- cmd->hash_dst_len = ses->hs_mlen;
return (0);
}
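The simplified `xlp_sec_newsession()` above defaults the digest length to the hash's full output size when the session does not request truncation (`csp_auth_mlen == 0`), via `crypto_auth_hash(csp)->hashsize`. The rule in isolation, with the standard MD5/SHA-1 sizes:

```c
#include <assert.h>

/* Standard digest sizes; stand-ins for the xform hashsize fields. */
#define MD5_LEN		16
#define SHA1_LEN	20

/*
 * A requested (possibly truncated) MAC length wins; otherwise fall
 * back to the hash algorithm's full digest size.
 */
static int
effective_mlen(int requested, int hashsize)
{
	return (requested != 0 ? requested : hashsize);
}
```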
@@ -510,54 +461,42 @@ xlp_sec_newsession(device_t dev, crypto_session_t cses, struct cryptoini *cri)
* ram. to blow away any keys already stored there.
*/
-static int
+static void
xlp_copyiv(struct xlp_sec_softc *sc, struct xlp_sec_command *cmd,
- struct cryptodesc *enccrd)
+ const struct crypto_session_params *csp)
{
- unsigned int ivlen = 0;
struct cryptop *crp = NULL;
crp = cmd->crp;
- if (enccrd->crd_alg != CRYPTO_ARC4) {
- ivlen = ((enccrd->crd_alg == CRYPTO_AES_CBC) ?
- XLP_SEC_AES_IV_LENGTH : XLP_SEC_DES_IV_LENGTH);
- if (enccrd->crd_flags & CRD_F_ENCRYPT) {
- if (enccrd->crd_flags & CRD_F_IV_EXPLICIT) {
- bcopy(enccrd->crd_iv, cmd->iv, ivlen);
- } else {
- bcopy(cmd->ses->ses_iv, cmd->iv, ivlen);
- }
- if ((enccrd->crd_flags & CRD_F_IV_PRESENT) == 0) {
- crypto_copyback(crp->crp_flags,
- crp->crp_buf, enccrd->crd_inject,
- ivlen, cmd->iv);
- }
- } else {
- if (enccrd->crd_flags & CRD_F_IV_EXPLICIT) {
- bcopy(enccrd->crd_iv, cmd->iv, ivlen);
- } else {
- crypto_copydata(crp->crp_flags, crp->crp_buf,
- enccrd->crd_inject, ivlen, cmd->iv);
- }
- }
+ if (csp->csp_cipher_alg != CRYPTO_ARC4) {
+ if (crp->crp_flags & CRYPTO_F_IV_GENERATE) {
+ arc4rand(cmd->iv, csp->csp_ivlen, 0);
+ crypto_copyback(crp, crp->crp_iv_start, csp->csp_ivlen,
+ cmd->iv);
+ } else if (crp->crp_flags & CRYPTO_F_IV_SEPARATE)
+ memcpy(cmd->iv, crp->crp_iv, csp->csp_ivlen);
}
- return (0);
}
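The rewritten `xlp_copyiv()` above handles two IV sources: a freshly generated IV (copied back into the buffer) and a separate IV carried in `crp_iv`; `cryptocteon_process` additionally reads the IV out of the request buffer when neither flag is set. That precedence can be sketched as follows, with illustrative flag and enum names rather than the real `CRYPTO_F_IV_*` definitions:

```c
#include <assert.h>

/* Stand-ins for CRYPTO_F_IV_GENERATE / CRYPTO_F_IV_SEPARATE. */
#define F_IV_GENERATE	0x1
#define F_IV_SEPARATE	0x2

enum iv_src { IV_RANDOM, IV_FROM_CRP_IV, IV_FROM_BUFFER };

/*
 * Precedence used by the drivers: a generated IV wins (and is written
 * back so the consumer can see it), then a separate per-request IV,
 * else the IV is read in place from the data buffer at crp_iv_start.
 */
static enum iv_src
iv_source(int flags)
{
	if (flags & F_IV_GENERATE)
		return (IV_RANDOM);
	if (flags & F_IV_SEPARATE)
		return (IV_FROM_CRP_IV);
	return (IV_FROM_BUFFER);
}
```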
static int
xlp_get_nsegs(struct cryptop *crp, unsigned int *nsegs)
{
- if (crp->crp_flags & CRYPTO_F_IMBUF) {
+ switch (crp->crp_buf_type) {
+ case CRYPTO_BUF_MBUF:
+ {
struct mbuf *m = NULL;
- m = (struct mbuf *)crp->crp_buf;
+ m = crp->crp_mbuf;
while (m != NULL) {
*nsegs += NLM_CRYPTO_NUM_SEGS_REQD(m->m_len);
m = m->m_next;
}
- } else if (crp->crp_flags & CRYPTO_F_IOV) {
+ break;
+ }
+ case CRYPTO_BUF_UIO:
+ {
struct uio *uio = NULL;
struct iovec *iov = NULL;
int iol = 0;
@@ -570,8 +509,13 @@ xlp_get_nsegs(struct cryptop *crp, unsigned int *nsegs)
iol--;
iov++;
}
- } else {
+ break;
+ }
+ case CRYPTO_BUF_CONTIG:
*nsegs = NLM_CRYPTO_NUM_SEGS_REQD(crp->crp_ilen);
+ break;
+ default:
+ return (EINVAL);
}
return (0);
}
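`xlp_get_nsegs()` above sums the DMA segments needed per mbuf, per iovec, or for one contiguous buffer. The accumulation, abstracted over a list of buffer lengths; the segment size and macro name are assumptions standing in for `NLM_CRYPTO_NUM_SEGS_REQD()`:

```c
#include <assert.h>
#include <stddef.h>

/* Assumed engine segment size; the real value is hardware-defined. */
#define SEG		8192
#define SEGS_REQD(len)	(((len) + SEG - 1) / SEG)	/* ceil(len/SEG) */

/*
 * Count segments over a chain of buffer lengths, as xlp_get_nsegs
 * does once per mbuf in a chain or per iovec in a uio.
 */
static unsigned int
count_segs(const size_t *lens, size_t n)
{
	unsigned int nsegs = 0;
	size_t i;

	for (i = 0; i < n; i++)
		nsegs += SEGS_REQD(lens[i]);
	return (nsegs);
}
```

Note that `xlp_sec_process()` later adds one more segment when the IV is supplied separately, since the IV is handed to the engine as its own segment to avoid a copy.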
@@ -638,20 +582,24 @@ static int
xlp_sec_process(device_t dev, struct cryptop *crp, int hint)
{
struct xlp_sec_softc *sc = device_get_softc(dev);
+ const struct crypto_session_params *csp;
struct xlp_sec_command *cmd = NULL;
int err = -1, ret = 0;
- struct cryptodesc *crd1, *crd2;
struct xlp_sec_session *ses;
unsigned int nsegs = 0;
- if (crp == NULL || crp->crp_callback == NULL) {
- return (EINVAL);
- }
- if (sc == NULL) {
- err = EINVAL;
+ ses = crypto_get_driver_session(crp->crp_session);
+ csp = crypto_get_params(crp->crp_session);
+
+ /*
+ * This device only supports AAD requests where the AAD is
+ * adjacent to the payload.
+ */
+ if (crp->crp_aad_length != 0 && crp->crp_payload_start !=
+ crp->crp_aad_start + crp->crp_aad_length) {
+ err = EFBIG;
goto errout;
}
- ses = crypto_get_driver_session(crp->crp_session);
if ((cmd = malloc(sizeof(struct xlp_sec_command), M_DEVBUF,
M_NOWAIT | M_ZERO)) == NULL) {
@@ -663,18 +611,12 @@ xlp_sec_process(device_t dev, struct cryptop *crp, int hint)
cmd->ses = ses;
cmd->hash_dst_len = ses->hs_mlen;
- if ((crd1 = crp->crp_desc) == NULL) {
- err = EINVAL;
- goto errout;
- }
- crd2 = crd1->crd_next;
-
if ((ret = xlp_get_nsegs(crp, &nsegs)) != 0) {
err = EINVAL;
goto errout;
}
- if (((crd1 != NULL) && (crd1->crd_flags & CRD_F_IV_EXPLICIT)) ||
- ((crd2 != NULL) && (crd2->crd_flags & CRD_F_IV_EXPLICIT))) {
+
+ if (crp->crp_flags & CRYPTO_F_IV_SEPARATE) {
/* Since IV is given as separate segment to avoid copy */
nsegs += 1;
}
@@ -683,98 +625,70 @@ xlp_sec_process(device_t dev, struct cryptop *crp, int hint)
if ((err = xlp_alloc_cmd_params(cmd, nsegs)) != 0)
goto errout;
- if ((crd1 != NULL) && (crd2 == NULL)) {
- if (crd1->crd_alg == CRYPTO_DES_CBC ||
- crd1->crd_alg == CRYPTO_3DES_CBC ||
- crd1->crd_alg == CRYPTO_AES_CBC ||
- crd1->crd_alg == CRYPTO_ARC4) {
- cmd->enccrd = crd1;
- cmd->maccrd = NULL;
- if ((ret = nlm_get_cipher_param(cmd)) != 0) {
- err = EINVAL;
- goto errout;
- }
- if (crd1->crd_flags & CRD_F_IV_EXPLICIT)
- cmd->cipheroff = cmd->ivlen;
- else
- cmd->cipheroff = cmd->enccrd->crd_skip;
- cmd->cipherlen = cmd->enccrd->crd_len;
- if (crd1->crd_flags & CRD_F_IV_PRESENT)
- cmd->ivoff = 0;
- else
- cmd->ivoff = cmd->enccrd->crd_inject;
- if ((err = xlp_copyiv(sc, cmd, cmd->enccrd)) != 0)
- goto errout;
- if ((err = nlm_crypto_do_cipher(sc, cmd)) != 0)
- goto errout;
- } else if (crd1->crd_alg == CRYPTO_MD5_HMAC ||
- crd1->crd_alg == CRYPTO_SHA1_HMAC ||
- crd1->crd_alg == CRYPTO_SHA1 ||
- crd1->crd_alg == CRYPTO_MD5) {
- cmd->enccrd = NULL;
- cmd->maccrd = crd1;
- if ((ret = nlm_get_digest_param(cmd)) != 0) {
- err = EINVAL;
- goto errout;
- }
- cmd->hashoff = cmd->maccrd->crd_skip;
- cmd->hashlen = cmd->maccrd->crd_len;
- cmd->hmacpad = 0;
- cmd->hashsrc = 0;
- if ((err = nlm_crypto_do_digest(sc, cmd)) != 0)
- goto errout;
- } else {
+ switch (csp->csp_mode) {
+ case CSP_MODE_CIPHER:
+ if ((ret = nlm_get_cipher_param(cmd, csp)) != 0) {
err = EINVAL;
goto errout;
}
- } else if( (crd1 != NULL) && (crd2 != NULL) ) {
- if ((crd1->crd_alg == CRYPTO_MD5_HMAC ||
- crd1->crd_alg == CRYPTO_SHA1_HMAC ||
- crd1->crd_alg == CRYPTO_MD5 ||
- crd1->crd_alg == CRYPTO_SHA1) &&
- (crd2->crd_alg == CRYPTO_DES_CBC ||
- crd2->crd_alg == CRYPTO_3DES_CBC ||
- crd2->crd_alg == CRYPTO_AES_CBC ||
- crd2->crd_alg == CRYPTO_ARC4)) {
- cmd->maccrd = crd1;
- cmd->enccrd = crd2;
- } else if ((crd1->crd_alg == CRYPTO_DES_CBC ||
- crd1->crd_alg == CRYPTO_ARC4 ||
- crd1->crd_alg == CRYPTO_3DES_CBC ||
- crd1->crd_alg == CRYPTO_AES_CBC) &&
- (crd2->crd_alg == CRYPTO_MD5_HMAC ||
- crd2->crd_alg == CRYPTO_SHA1_HMAC ||
- crd2->crd_alg == CRYPTO_MD5 ||
- crd2->crd_alg == CRYPTO_SHA1)) {
- cmd->enccrd = crd1;
- cmd->maccrd = crd2;
- } else {
+ cmd->cipheroff = crp->crp_payload_start;
+ cmd->cipherlen = crp->crp_payload_length;
+ if (crp->crp_flags & CRYPTO_F_IV_SEPARATE) {
+ cmd->cipheroff += cmd->ivlen;
+ cmd->ivoff = 0;
+ } else
+ cmd->ivoff = crp->crp_iv_start;
+ xlp_copyiv(sc, cmd, csp);
+ if ((err = nlm_crypto_do_cipher(sc, cmd, csp)) != 0)
+ goto errout;
+ break;
+ case CSP_MODE_DIGEST:
+ if ((ret = nlm_get_digest_param(cmd, csp)) != 0) {
err = EINVAL;
goto errout;
}
- if ((ret = nlm_get_cipher_param(cmd)) != 0) {
+ cmd->hashoff = crp->crp_payload_start;
+ cmd->hashlen = crp->crp_payload_length;
+ cmd->hmacpad = 0;
+ cmd->hashsrc = 0;
+ if ((err = nlm_crypto_do_digest(sc, cmd, csp)) != 0)
+ goto errout;
+ break;
+ case CSP_MODE_ETA:
+ if ((ret = nlm_get_cipher_param(cmd, csp)) != 0) {
err = EINVAL;
goto errout;
}
- if ((ret = nlm_get_digest_param(cmd)) != 0) {
+ if ((ret = nlm_get_digest_param(cmd, csp)) != 0) {
err = EINVAL;
goto errout;
}
- cmd->ivoff = cmd->enccrd->crd_inject;
- cmd->hashoff = cmd->maccrd->crd_skip;
- cmd->hashlen = cmd->maccrd->crd_len;
+ if (crp->crp_aad_length != 0) {
+ cmd->hashoff = crp->crp_aad_start;
+ cmd->hashlen = crp->crp_aad_length +
+ crp->crp_payload_length;
+ } else {
+ cmd->hashoff = crp->crp_payload_start;
+ cmd->hashlen = crp->crp_payload_length;
+ }
cmd->hmacpad = 0;
- if (cmd->enccrd->crd_flags & CRD_F_ENCRYPT)
+ if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op))
cmd->hashsrc = 1;
else
cmd->hashsrc = 0;
- cmd->cipheroff = cmd->enccrd->crd_skip;
- cmd->cipherlen = cmd->enccrd->crd_len;
- if ((err = xlp_copyiv(sc, cmd, cmd->enccrd)) != 0)
- goto errout;
- if ((err = nlm_crypto_do_cipher_digest(sc, cmd)) != 0)
+ cmd->cipheroff = crp->crp_payload_start;
+ cmd->cipherlen = crp->crp_payload_length;
+ if (crp->crp_flags & CRYPTO_F_IV_SEPARATE) {
+ cmd->hashoff += cmd->ivlen;
+ cmd->cipheroff += cmd->ivlen;
+ cmd->ivoff = 0;
+ } else
+ cmd->ivoff = crp->crp_iv_start;
+ xlp_copyiv(sc, cmd, csp);
+ if ((err = nlm_crypto_do_cipher_digest(sc, cmd, csp)) != 0)
goto errout;
- } else {
+ break;
+ default:
err = EINVAL;
goto errout;
}
diff --git a/sys/mips/nlm/dev/sec/nlmseclib.c b/sys/mips/nlm/dev/sec/nlmseclib.c
index 01210c07cf33..f0b99135833c 100644
--- a/sys/mips/nlm/dev/sec/nlmseclib.c
+++ b/sys/mips/nlm/dev/sec/nlmseclib.c
@@ -92,18 +92,17 @@ nlm_crypto_complete_sec_request(struct xlp_sec_softc *sc,
}
int
-nlm_crypto_form_srcdst_segs(struct xlp_sec_command *cmd)
+nlm_crypto_form_srcdst_segs(struct xlp_sec_command *cmd,
+ const struct crypto_session_params *csp)
{
unsigned int srcseg = 0, dstseg = 0;
- struct cryptodesc *cipdesc = NULL;
struct cryptop *crp = NULL;
crp = cmd->crp;
- cipdesc = cmd->enccrd;
- if (cipdesc != NULL) {
+ if (csp->csp_mode != CSP_MODE_DIGEST) {
/* IV is given as ONE segment to avoid copy */
- if (cipdesc->crd_flags & CRD_F_IV_EXPLICIT) {
+ if (crp->crp_flags & CRYPTO_F_IV_SEPARATE) {
srcseg = nlm_crypto_fill_src_seg(cmd->paramp, srcseg,
cmd->iv, cmd->ivlen);
dstseg = nlm_crypto_fill_dst_seg(cmd->paramp, dstseg,
@@ -111,32 +110,37 @@ nlm_crypto_form_srcdst_segs(struct xlp_sec_command *cmd)
}
}
- if (crp->crp_flags & CRYPTO_F_IMBUF) {
+ switch (crp->crp_buf_type) {
+ case CRYPTO_BUF_MBUF:
+ {
struct mbuf *m = NULL;
- m = (struct mbuf *)crp->crp_buf;
+ m = crp->crp_mbuf;
while (m != NULL) {
srcseg = nlm_crypto_fill_src_seg(cmd->paramp, srcseg,
mtod(m,caddr_t), m->m_len);
- if (cipdesc != NULL) {
+ if (csp->csp_mode != CSP_MODE_DIGEST) {
dstseg = nlm_crypto_fill_dst_seg(cmd->paramp,
dstseg, mtod(m,caddr_t), m->m_len);
}
m = m->m_next;
}
- } else if (crp->crp_flags & CRYPTO_F_IOV) {
+ break;
+ }
+ case CRYPTO_BUF_UIO:
+ {
struct uio *uio = NULL;
struct iovec *iov = NULL;
int iol = 0;
- uio = (struct uio *)crp->crp_buf;
- iov = (struct iovec *)uio->uio_iov;
+ uio = crp->crp_uio;
+ iov = uio->uio_iov;
iol = uio->uio_iovcnt;
while (iol > 0) {
srcseg = nlm_crypto_fill_src_seg(cmd->paramp, srcseg,
(caddr_t)iov->iov_base, iov->iov_len);
- if (cipdesc != NULL) {
+ if (csp->csp_mode != CSP_MODE_DIGEST) {
dstseg = nlm_crypto_fill_dst_seg(cmd->paramp,
dstseg, (caddr_t)iov->iov_base,
iov->iov_len);
@@ -144,67 +148,75 @@ nlm_crypto_form_srcdst_segs(struct xlp_sec_command *cmd)
iov++;
iol--;
}
- } else {
+ break;
+ }
+ case CRYPTO_BUF_CONTIG:
srcseg = nlm_crypto_fill_src_seg(cmd->paramp, srcseg,
((caddr_t)crp->crp_buf), crp->crp_ilen);
- if (cipdesc != NULL) {
+ if (csp->csp_mode != CSP_MODE_DIGEST) {
dstseg = nlm_crypto_fill_dst_seg(cmd->paramp, dstseg,
((caddr_t)crp->crp_buf), crp->crp_ilen);
}
+ break;
}
return (0);
}
int
-nlm_crypto_do_cipher(struct xlp_sec_softc *sc, struct xlp_sec_command *cmd)
+nlm_crypto_do_cipher(struct xlp_sec_softc *sc, struct xlp_sec_command *cmd,
+ const struct crypto_session_params *csp)
{
- struct cryptodesc *cipdesc = NULL;
- unsigned char *cipkey = NULL;
+ const unsigned char *cipkey = NULL;
int ret = 0;
- cipdesc = cmd->enccrd;
- cipkey = (unsigned char *)cipdesc->crd_key;
+ if (cmd->crp->crp_cipher_key != NULL)
+ cipkey = cmd->crp->crp_cipher_key;
+ else
+ cipkey = csp->csp_cipher_key;
if (cmd->cipheralg == NLM_CIPHER_3DES) {
- if (!(cipdesc->crd_flags & CRD_F_ENCRYPT)) {
- uint64_t *k, *tkey;
- k = (uint64_t *)cipdesc->crd_key;
+ if (!CRYPTO_OP_IS_ENCRYPT(cmd->crp->crp_op)) {
+ const uint64_t *k;
+ uint64_t *tkey;
+ k = (const uint64_t *)cipkey;
tkey = (uint64_t *)cmd->des3key;
tkey[2] = k[0];
tkey[1] = k[1];
tkey[0] = k[2];
- cipkey = (unsigned char *)tkey;
+ cipkey = (const unsigned char *)tkey;
}
}
nlm_crypto_fill_pkt_ctrl(cmd->ctrlp, 0, NLM_HASH_BYPASS, 0,
cmd->cipheralg, cmd->ciphermode, cipkey,
- (cipdesc->crd_klen >> 3), NULL, 0);
+ csp->csp_cipher_klen, NULL, 0);
nlm_crypto_fill_cipher_pkt_param(cmd->ctrlp, cmd->paramp,
- (cipdesc->crd_flags & CRD_F_ENCRYPT) ? 1 : 0, cmd->ivoff,
+ CRYPTO_OP_IS_ENCRYPT(cmd->crp->crp_op) ? 1 : 0, cmd->ivoff,
cmd->ivlen, cmd->cipheroff, cmd->cipherlen);
- nlm_crypto_form_srcdst_segs(cmd);
+ nlm_crypto_form_srcdst_segs(cmd, csp);
ret = nlm_crypto_complete_sec_request(sc, cmd);
return (ret);
}
int
-nlm_crypto_do_digest(struct xlp_sec_softc *sc, struct xlp_sec_command *cmd)
+nlm_crypto_do_digest(struct xlp_sec_softc *sc, struct xlp_sec_command *cmd,
+ const struct crypto_session_params *csp)
{
- struct cryptodesc *digdesc = NULL;
+ const char *key;
int ret=0;
- digdesc = cmd->maccrd;
-
- nlm_crypto_fill_pkt_ctrl(cmd->ctrlp, (digdesc->crd_klen) ? 1 : 0,
+ if (cmd->crp->crp_auth_key != NULL)
+ key = cmd->crp->crp_auth_key;
+ else
+ key = csp->csp_auth_key;
+ nlm_crypto_fill_pkt_ctrl(cmd->ctrlp, csp->csp_auth_klen ? 1 : 0,
cmd->hashalg, cmd->hashmode, NLM_CIPHER_BYPASS, 0,
- NULL, 0, digdesc->crd_key, digdesc->crd_klen >> 3);
+ NULL, 0, key, csp->csp_auth_klen);
nlm_crypto_fill_auth_pkt_param(cmd->ctrlp, cmd->paramp,
cmd->hashoff, cmd->hashlen, cmd->hmacpad,
(unsigned char *)cmd->hashdest);
- nlm_crypto_form_srcdst_segs(cmd);
+ nlm_crypto_form_srcdst_segs(cmd, csp);
ret = nlm_crypto_complete_sec_request(sc, cmd);
@@ -213,48 +225,54 @@ nlm_crypto_do_digest(struct xlp_sec_softc *sc, struct xlp_sec_command *cmd)
int
nlm_crypto_do_cipher_digest(struct xlp_sec_softc *sc,
- struct xlp_sec_command *cmd)
+ struct xlp_sec_command *cmd, const struct crypto_session_params *csp)
{
- struct cryptodesc *cipdesc=NULL, *digdesc=NULL;
- unsigned char *cipkey = NULL;
+ const unsigned char *cipkey = NULL;
+ const char *authkey;
int ret=0;
- cipdesc = cmd->enccrd;
- digdesc = cmd->maccrd;
-
- cipkey = (unsigned char *)cipdesc->crd_key;
+ if (cmd->crp->crp_cipher_key != NULL)
+ cipkey = cmd->crp->crp_cipher_key;
+ else
+ cipkey = csp->csp_cipher_key;
+ if (cmd->crp->crp_auth_key != NULL)
+ authkey = cmd->crp->crp_auth_key;
+ else
+ authkey = csp->csp_auth_key;
if (cmd->cipheralg == NLM_CIPHER_3DES) {
- if (!(cipdesc->crd_flags & CRD_F_ENCRYPT)) {
- uint64_t *k, *tkey;
- k = (uint64_t *)cipdesc->crd_key;
+ if (!CRYPTO_OP_IS_ENCRYPT(cmd->crp->crp_op)) {
+ const uint64_t *k;
+ uint64_t *tkey;
+ k = (const uint64_t *)cipkey;
tkey = (uint64_t *)cmd->des3key;
tkey[2] = k[0];
tkey[1] = k[1];
tkey[0] = k[2];
- cipkey = (unsigned char *)tkey;
+ cipkey = (const unsigned char *)tkey;
}
}
- nlm_crypto_fill_pkt_ctrl(cmd->ctrlp, (digdesc->crd_klen) ? 1 : 0,
+ nlm_crypto_fill_pkt_ctrl(cmd->ctrlp, csp->csp_auth_klen ? 1 : 0,
cmd->hashalg, cmd->hashmode, cmd->cipheralg, cmd->ciphermode,
- cipkey, (cipdesc->crd_klen >> 3),
- digdesc->crd_key, (digdesc->crd_klen >> 3));
+ cipkey, csp->csp_cipher_klen,
+ authkey, csp->csp_auth_klen);
nlm_crypto_fill_cipher_auth_pkt_param(cmd->ctrlp, cmd->paramp,
- (cipdesc->crd_flags & CRD_F_ENCRYPT) ? 1 : 0, cmd->hashsrc,
+ CRYPTO_OP_IS_ENCRYPT(cmd->crp->crp_op) ? 1 : 0, cmd->hashsrc,
cmd->ivoff, cmd->ivlen, cmd->hashoff, cmd->hashlen,
cmd->hmacpad, cmd->cipheroff, cmd->cipherlen,
(unsigned char *)cmd->hashdest);
- nlm_crypto_form_srcdst_segs(cmd);
+ nlm_crypto_form_srcdst_segs(cmd, csp);
ret = nlm_crypto_complete_sec_request(sc, cmd);
return (ret);
}
int
-nlm_get_digest_param(struct xlp_sec_command *cmd)
+nlm_get_digest_param(struct xlp_sec_command *cmd,
+ const struct crypto_session_params *csp)
{
- switch(cmd->maccrd->crd_alg) {
+ switch(csp->csp_auth_alg) {
case CRYPTO_MD5:
cmd->hashalg = NLM_HASH_MD5;
cmd->hashmode = NLM_HASH_MODE_SHA1;
@@ -278,9 +296,10 @@ nlm_get_digest_param(struct xlp_sec_command *cmd)
return (0);
}
int
-nlm_get_cipher_param(struct xlp_sec_command *cmd)
+nlm_get_cipher_param(struct xlp_sec_command *cmd,
+ const struct crypto_session_params *csp)
{
- switch(cmd->enccrd->crd_alg) {
+ switch(csp->csp_cipher_alg) {
case CRYPTO_DES_CBC:
cmd->cipheralg = NLM_CIPHER_DES;
cmd->ciphermode = NLM_CIPHER_MODE_CBC;
diff --git a/sys/mips/nlm/dev/sec/nlmseclib.h b/sys/mips/nlm/dev/sec/nlmseclib.h
index ab7a13370fe7..2bbabd280663 100644
--- a/sys/mips/nlm/dev/sec/nlmseclib.h
+++ b/sys/mips/nlm/dev/sec/nlmseclib.h
@@ -91,7 +91,6 @@ extern unsigned int creditleft;
struct xlp_sec_command {
struct cryptop *crp;
- struct cryptodesc *enccrd, *maccrd;
struct xlp_sec_session *ses;
struct nlm_crypto_pkt_ctrl *ctrlp;
struct nlm_crypto_pkt_param *paramp;
@@ -116,8 +115,6 @@ struct xlp_sec_command {
struct xlp_sec_session {
int hs_mlen;
- uint8_t ses_iv[EALG_MAX_BLOCK_LEN];
- struct xlp_sec_command cmd;
};
/*
@@ -135,17 +132,22 @@ struct xlp_sec_softc {
#ifdef NLM_SEC_DEBUG
void print_crypto_params(struct xlp_sec_command *cmd, struct nlm_fmn_msg m);
-void xlp_sec_print_data(struct cryptop *crp);
void print_cmd(struct xlp_sec_command *cmd);
#endif
-int nlm_crypto_form_srcdst_segs(struct xlp_sec_command *cmd);
+int nlm_crypto_form_srcdst_segs(struct xlp_sec_command *cmd,
+ const struct crypto_session_params *csp);
int nlm_crypto_do_cipher(struct xlp_sec_softc *sc,
- struct xlp_sec_command *cmd);
+ struct xlp_sec_command *cmd,
+ const struct crypto_session_params *csp);
int nlm_crypto_do_digest(struct xlp_sec_softc *sc,
- struct xlp_sec_command *cmd);
+ struct xlp_sec_command *cmd,
+ const struct crypto_session_params *csp);
int nlm_crypto_do_cipher_digest(struct xlp_sec_softc *sc,
- struct xlp_sec_command *cmd);
-int nlm_get_digest_param(struct xlp_sec_command *cmd);
-int nlm_get_cipher_param(struct xlp_sec_command *cmd);
+ struct xlp_sec_command *cmd,
+ const struct crypto_session_params *csp);
+int nlm_get_digest_param(struct xlp_sec_command *cmd,
+ const struct crypto_session_params *csp);
+int nlm_get_cipher_param(struct xlp_sec_command *cmd,
+ const struct crypto_session_params *csp);
#endif /* _NLMSECLIB_H_ */
diff --git a/sys/mips/nlm/hal/nlmsaelib.h b/sys/mips/nlm/hal/nlmsaelib.h
index 230b3740f401..6e1451beeb27 100644
--- a/sys/mips/nlm/hal/nlmsaelib.h
+++ b/sys/mips/nlm/hal/nlmsaelib.h
@@ -462,8 +462,8 @@ static __inline__ int
nlm_crypto_fill_pkt_ctrl(struct nlm_crypto_pkt_ctrl *ctrl, unsigned int hmac,
enum nlm_hash_algo hashalg, enum nlm_hash_mode hashmode,
enum nlm_cipher_algo cipheralg, enum nlm_cipher_mode ciphermode,
- unsigned char *cipherkey, unsigned int cipherkeylen,
- unsigned char *hashkey, unsigned int hashkeylen)
+ const unsigned char *cipherkey, unsigned int cipherkeylen,
+ const unsigned char *hashkey, unsigned int hashkeylen)
{
unsigned int taglen = 0, hklen = 0;
diff --git a/sys/netipsec/xform.h b/sys/netipsec/xform.h
index 910a88a706f3..85c9b65d1643 100644
--- a/sys/netipsec/xform.h
+++ b/sys/netipsec/xform.h
@@ -107,10 +107,11 @@ void xform_attach(void *);
void xform_detach(void *);
int xform_init(struct secasvar *, u_short);
-struct cryptoini;
+struct crypto_session_params;
/* XF_AH */
int xform_ah_authsize(const struct auth_hash *);
-extern int ah_init0(struct secasvar *, struct xformsw *, struct cryptoini *);
+int ah_init0(struct secasvar *, struct xformsw *,
+ struct crypto_session_params *);
extern int ah_zeroize(struct secasvar *sav);
extern size_t ah_hdrsiz(struct secasvar *);
diff --git a/sys/netipsec/xform_ah.c b/sys/netipsec/xform_ah.c
index 2ed9683a0572..834376634d5a 100644
--- a/sys/netipsec/xform_ah.c
+++ b/sys/netipsec/xform_ah.c
@@ -128,9 +128,7 @@ xform_ah_authsize(const struct auth_hash *esph)
alen = esph->hashsize / 2; /* RFC4868 2.3 */
break;
- case CRYPTO_AES_128_NIST_GMAC:
- case CRYPTO_AES_192_NIST_GMAC:
- case CRYPTO_AES_256_NIST_GMAC:
+ case CRYPTO_AES_NIST_GMAC:
alen = esph->hashsize;
break;
@@ -174,7 +172,8 @@ ah_hdrsiz(struct secasvar *sav)
* NB: public for use by esp_init.
*/
int
-ah_init0(struct secasvar *sav, struct xformsw *xsp, struct cryptoini *cria)
+ah_init0(struct secasvar *sav, struct xformsw *xsp,
+ struct crypto_session_params *csp)
{
const struct auth_hash *thash;
int keylen;
@@ -235,11 +234,10 @@ ah_init0(struct secasvar *sav, struct xformsw *xsp, struct cryptoini *cria)
sav->tdb_authalgxform = thash;
/* Initialize crypto session. */
- bzero(cria, sizeof (*cria));
- cria->cri_alg = sav->tdb_authalgxform->type;
- cria->cri_klen = _KEYBITS(sav->key_auth);
- cria->cri_key = sav->key_auth->key_data;
- cria->cri_mlen = AUTHSIZE(sav);
+ csp->csp_auth_alg = sav->tdb_authalgxform->type;
+ csp->csp_auth_klen = _KEYBITS(sav->key_auth) / 8;
+ csp->csp_auth_key = sav->key_auth->key_data;
+ csp->csp_auth_mlen = AUTHSIZE(sav);
return 0;
}
@@ -250,12 +248,14 @@ ah_init0(struct secasvar *sav, struct xformsw *xsp, struct cryptoini *cria)
static int
ah_init(struct secasvar *sav, struct xformsw *xsp)
{
- struct cryptoini cria;
+ struct crypto_session_params csp;
int error;
- error = ah_init0(sav, xsp, &cria);
+ memset(&csp, 0, sizeof(csp));
+ csp.csp_mode = CSP_MODE_DIGEST;
+ error = ah_init0(sav, xsp, &csp);
return error ? error :
- crypto_newsession(&sav->tdb_cryptoid, &cria, V_crypto_support);
+ crypto_newsession(&sav->tdb_cryptoid, &csp, V_crypto_support);
}
/*
@@ -560,7 +560,6 @@ ah_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
{
IPSEC_DEBUG_DECLARE(char buf[128]);
const struct auth_hash *ahx;
- struct cryptodesc *crda;
struct cryptop *crp;
struct xform_data *xd;
struct newah *ah;
@@ -628,7 +627,7 @@ ah_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
AHSTAT_ADD(ahs_ibytes, m->m_pkthdr.len - skip - hl);
/* Get crypto descriptors. */
- crp = crypto_getreq(1);
+ crp = crypto_getreq(cryptoid, M_NOWAIT);
if (crp == NULL) {
DPRINTF(("%s: failed to acquire crypto descriptor\n",
__func__));
@@ -637,17 +636,9 @@ ah_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
goto bad;
}
- crda = crp->crp_desc;
- IPSEC_ASSERT(crda != NULL, ("null crypto descriptor"));
-
- crda->crd_skip = 0;
- crda->crd_len = m->m_pkthdr.len;
- crda->crd_inject = skip + rplen;
-
- /* Authentication operation. */
- crda->crd_alg = ahx->type;
- crda->crd_klen = _KEYBITS(sav->key_auth);
- crda->crd_key = sav->key_auth->key_data;
+ crp->crp_payload_start = 0;
+ crp->crp_payload_length = m->m_pkthdr.len;
+ crp->crp_digest_start = skip + rplen;
/* Allocate IPsec-specific opaque crypto info. */
xd = malloc(sizeof(*xd) + skip + rplen + authsize, M_XDATA,
@@ -686,13 +677,14 @@ ah_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
/* Crypto operation descriptor. */
crp->crp_ilen = m->m_pkthdr.len; /* Total input length. */
- crp->crp_flags = CRYPTO_F_IMBUF | CRYPTO_F_CBIFSYNC;
+ crp->crp_op = CRYPTO_OP_COMPUTE_DIGEST;
+ crp->crp_flags = CRYPTO_F_CBIFSYNC;
if (V_async_crypto)
crp->crp_flags |= CRYPTO_F_ASYNC | CRYPTO_F_ASYNC_KEEPORDER;
- crp->crp_buf = (caddr_t) m;
+ crp->crp_mbuf = m;
+ crp->crp_buf_type = CRYPTO_BUF_MBUF;
crp->crp_callback = ah_input_cb;
- crp->crp_session = cryptoid;
- crp->crp_opaque = (caddr_t) xd;
+ crp->crp_opaque = xd;
/* These are passed as-is to the callback. */
xd->sav = sav;
@@ -725,8 +717,8 @@ ah_input_cb(struct cryptop *crp)
int authsize, rplen, ahsize, error, skip, protoff;
uint8_t nxt;
- m = (struct mbuf *) crp->crp_buf;
- xd = (struct xform_data *) crp->crp_opaque;
+ m = crp->crp_mbuf;
+ xd = crp->crp_opaque;
CURVNET_SET(xd->vnet);
sav = xd->sav;
skip = xd->skip;
@@ -866,7 +858,6 @@ ah_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
{
IPSEC_DEBUG_DECLARE(char buf[IPSEC_ADDRSTRLEN]);
const struct auth_hash *ahx;
- struct cryptodesc *crda;
struct xform_data *xd;
struct mbuf *mi;
struct cryptop *crp;
@@ -988,7 +979,7 @@ ah_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
SECASVAR_UNLOCK(sav);
/* Get crypto descriptors. */
- crp = crypto_getreq(1);
+ crp = crypto_getreq(cryptoid, M_NOWAIT);
if (crp == NULL) {
DPRINTF(("%s: failed to acquire crypto descriptors\n",
__func__));
@@ -997,15 +988,9 @@ ah_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
goto bad;
}
- crda = crp->crp_desc;
- crda->crd_skip = 0;
- crda->crd_inject = skip + rplen;
- crda->crd_len = m->m_pkthdr.len;
-
- /* Authentication operation. */
- crda->crd_alg = ahx->type;
- crda->crd_key = sav->key_auth->key_data;
- crda->crd_klen = _KEYBITS(sav->key_auth);
+ crp->crp_payload_start = 0;
+ crp->crp_payload_length = m->m_pkthdr.len;
+ crp->crp_digest_start = skip + rplen;
/* Allocate IPsec-specific opaque crypto info. */
xd = malloc(sizeof(struct xform_data) + skip, M_XDATA,
@@ -1069,13 +1054,14 @@ ah_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
/* Crypto operation descriptor. */
crp->crp_ilen = m->m_pkthdr.len; /* Total input length. */
- crp->crp_flags = CRYPTO_F_IMBUF | CRYPTO_F_CBIFSYNC;
+ crp->crp_op = CRYPTO_OP_COMPUTE_DIGEST;
+ crp->crp_flags = CRYPTO_F_CBIFSYNC;
if (V_async_crypto)
crp->crp_flags |= CRYPTO_F_ASYNC | CRYPTO_F_ASYNC_KEEPORDER;
- crp->crp_buf = (caddr_t) m;
+ crp->crp_mbuf = m;
+ crp->crp_buf_type = CRYPTO_BUF_MBUF;
crp->crp_callback = ah_output_cb;
- crp->crp_session = cryptoid;
- crp->crp_opaque = (caddr_t) xd;
+ crp->crp_opaque = xd;
/* These are passed as-is to the callback. */
xd->sp = sp;
diff --git a/sys/netipsec/xform_esp.c b/sys/netipsec/xform_esp.c
index 235d87ae1d98..c9c65aef6c4c 100644
--- a/sys/netipsec/xform_esp.c
+++ b/sys/netipsec/xform_esp.c
@@ -137,7 +137,7 @@ static int
esp_init(struct secasvar *sav, struct xformsw *xsp)
{
const struct enc_xform *txform;
- struct cryptoini cria, crie;
+ struct crypto_session_params csp;
int keylen;
int error;
@@ -193,11 +193,13 @@ esp_init(struct secasvar *sav, struct xformsw *xsp)
else
sav->ivlen = txform->ivsize;
+ memset(&csp, 0, sizeof(csp));
+
/*
* Setup AH-related state.
*/
if (sav->alg_auth != 0) {
- error = ah_init0(sav, xsp, &cria);
+ error = ah_init0(sav, xsp, &csp);
if (error)
return error;
}
@@ -231,35 +233,20 @@ esp_init(struct secasvar *sav, struct xformsw *xsp)
keylen, txform->name));
return EINVAL;
}
- bzero(&cria, sizeof(cria));
- cria.cri_alg = sav->tdb_authalgxform->type;
- cria.cri_key = sav->key_enc->key_data;
- cria.cri_klen = _KEYBITS(sav->key_enc) - SAV_ISGCM(sav) * 32;
- }
+ csp.csp_mode = CSP_MODE_AEAD;
+ } else if (sav->alg_auth != 0)
+ csp.csp_mode = CSP_MODE_ETA;
+ else
+ csp.csp_mode = CSP_MODE_CIPHER;
/* Initialize crypto session. */
- bzero(&crie, sizeof(crie));
- crie.cri_alg = sav->tdb_encalgxform->type;
- crie.cri_key = sav->key_enc->key_data;
- crie.cri_klen = _KEYBITS(sav->key_enc) - SAV_ISCTRORGCM(sav) * 32;
-
- if (sav->tdb_authalgxform && sav->tdb_encalgxform) {
- /* init both auth & enc */
- crie.cri_next = &cria;
- error = crypto_newsession(&sav->tdb_cryptoid,
- &crie, V_crypto_support);
- } else if (sav->tdb_encalgxform) {
- error = crypto_newsession(&sav->tdb_cryptoid,
- &crie, V_crypto_support);
- } else if (sav->tdb_authalgxform) {
- error = crypto_newsession(&sav->tdb_cryptoid,
- &cria, V_crypto_support);
- } else {
- /* XXX cannot happen? */
- DPRINTF(("%s: no encoding OR authentication xform!\n",
- __func__));
- error = EINVAL;
- }
+ csp.csp_cipher_alg = sav->tdb_encalgxform->type;
+ csp.csp_cipher_key = sav->key_enc->key_data;
+ csp.csp_cipher_klen = _KEYBITS(sav->key_enc) / 8 -
+ SAV_ISCTRORGCM(sav) * 4;
+ csp.csp_ivlen = txform->ivsize;
+
+ error = crypto_newsession(&sav->tdb_cryptoid, &csp, V_crypto_support);
return error;
}
@@ -289,7 +276,6 @@ esp_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
const struct auth_hash *esph;
const struct enc_xform *espx;
struct xform_data *xd;
- struct cryptodesc *crde;
struct cryptop *crp;
struct newesp *esp;
uint8_t *ivp;
@@ -369,7 +355,7 @@ esp_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
ESPSTAT_ADD(esps_ibytes, m->m_pkthdr.len - (skip + hlen + alen));
/* Get crypto descriptors */
- crp = crypto_getreq(esph && espx ? 2 : 1);
+ crp = crypto_getreq(cryptoid, M_NOWAIT);
if (crp == NULL) {
DPRINTF(("%s: failed to acquire crypto descriptors\n",
__func__));
@@ -379,7 +365,7 @@ esp_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
}
/* Get IPsec-specific opaque pointer */
- xd = malloc(sizeof(*xd) + alen, M_XDATA, M_NOWAIT | M_ZERO);
+ xd = malloc(sizeof(*xd), M_XDATA, M_NOWAIT | M_ZERO);
if (xd == NULL) {
DPRINTF(("%s: failed to allocate xform_data\n", __func__));
ESPSTAT_INC(esps_crypto);
@@ -389,39 +375,24 @@ esp_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
}
if (esph != NULL) {
- struct cryptodesc *crda = crp->crp_desc;
-
- IPSEC_ASSERT(crda != NULL, ("null ah crypto descriptor"));
-
- /* Authentication descriptor */
- crda->crd_skip = skip;
+ crp->crp_op = CRYPTO_OP_VERIFY_DIGEST;
+ crp->crp_aad_start = skip;
if (SAV_ISGCM(sav))
- crda->crd_len = 8; /* RFC4106 5, SPI + SN */
+ crp->crp_aad_length = 8; /* RFC4106 5, SPI + SN */
else
- crda->crd_len = m->m_pkthdr.len - (skip + alen);
- crda->crd_inject = m->m_pkthdr.len - alen;
-
- crda->crd_alg = esph->type;
-
- /* Copy the authenticator */
- m_copydata(m, m->m_pkthdr.len - alen, alen,
- (caddr_t) (xd + 1));
-
- /* Chain authentication request */
- crde = crda->crd_next;
- } else {
- crde = crp->crp_desc;
+ crp->crp_aad_length = hlen;
+ crp->crp_digest_start = m->m_pkthdr.len - alen;
}
/* Crypto operation descriptor */
crp->crp_ilen = m->m_pkthdr.len; /* Total input length */
- crp->crp_flags = CRYPTO_F_IMBUF | CRYPTO_F_CBIFSYNC;
+ crp->crp_flags = CRYPTO_F_CBIFSYNC;
if (V_async_crypto)
crp->crp_flags |= CRYPTO_F_ASYNC | CRYPTO_F_ASYNC_KEEPORDER;
- crp->crp_buf = (caddr_t) m;
+ crp->crp_mbuf = m;
+ crp->crp_buf_type = CRYPTO_BUF_MBUF;
crp->crp_callback = esp_input_cb;
- crp->crp_session = cryptoid;
- crp->crp_opaque = (caddr_t) xd;
+ crp->crp_opaque = xd;
/* These are passed as-is to the callback */
xd->sav = sav;
@@ -431,13 +402,12 @@ esp_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
xd->vnet = curvnet;
/* Decryption descriptor */
- IPSEC_ASSERT(crde != NULL, ("null esp crypto descriptor"));
- crde->crd_skip = skip + hlen;
- crde->crd_len = m->m_pkthdr.len - (skip + hlen + alen);
- crde->crd_inject = skip + hlen - sav->ivlen;
+ crp->crp_op |= CRYPTO_OP_DECRYPT;
+ crp->crp_payload_start = skip + hlen;
+ crp->crp_payload_length = m->m_pkthdr.len - (skip + hlen + alen);
if (SAV_ISCTRORGCM(sav)) {
- ivp = &crde->crd_iv[0];
+ ivp = &crp->crp_iv[0];
/* GCM IV Format: RFC4106 4 */
/* CTR IV Format: RFC3686 4 */
@@ -452,10 +422,9 @@ esp_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
}
m_copydata(m, skip + hlen - sav->ivlen, sav->ivlen, &ivp[4]);
- crde->crd_flags |= CRD_F_IV_EXPLICIT;
- }
-
- crde->crd_alg = espx->type;
+ crp->crp_flags |= CRYPTO_F_IV_SEPARATE;
+ } else if (sav->ivlen != 0)
+ crp->crp_iv_start = skip + hlen - sav->ivlen;
return (crypto_dispatch(crp));
bad:
@@ -471,22 +440,17 @@ static int
esp_input_cb(struct cryptop *crp)
{
IPSEC_DEBUG_DECLARE(char buf[128]);
- u_int8_t lastthree[3], aalg[AH_HMAC_MAXHASHLEN];
+ uint8_t lastthree[3];
const struct auth_hash *esph;
struct mbuf *m;
- struct cryptodesc *crd;
struct xform_data *xd;
struct secasvar *sav;
struct secasindex *saidx;
- caddr_t ptr;
crypto_session_t cryptoid;
int hlen, skip, protoff, error, alen;
- crd = crp->crp_desc;
- IPSEC_ASSERT(crd != NULL, ("null crypto descriptor!"));
-
- m = (struct mbuf *) crp->crp_buf;
- xd = (struct xform_data *) crp->crp_opaque;
+ m = crp->crp_mbuf;
+ xd = crp->crp_opaque;
CURVNET_SET(xd->vnet);
sav = xd->sav;
skip = xd->skip;
@@ -505,10 +469,15 @@ esp_input_cb(struct cryptop *crp)
CURVNET_RESTORE();
return (crypto_dispatch(crp));
}
- ESPSTAT_INC(esps_noxform);
- DPRINTF(("%s: crypto error %d\n", __func__, crp->crp_etype));
- error = crp->crp_etype;
- goto bad;
+
+ /* EBADMSG indicates authentication failure. */
+ if (!(crp->crp_etype == EBADMSG && esph != NULL)) {
+ ESPSTAT_INC(esps_noxform);
+ DPRINTF(("%s: crypto error %d\n", __func__,
+ crp->crp_etype));
+ error = crp->crp_etype;
+ goto bad;
+ }
}
/* Shouldn't happen... */
@@ -524,12 +493,7 @@ esp_input_cb(struct cryptop *crp)
if (esph != NULL) {
alen = xform_ah_authsize(esph);
AHSTAT_INC(ahs_hist[sav->alg_auth]);
- /* Copy the authenticator from the packet */
- m_copydata(m, m->m_pkthdr.len - alen, alen, aalg);
- ptr = (caddr_t) (xd + 1);
-
- /* Verify authenticator */
- if (timingsafe_bcmp(ptr, aalg, alen) != 0) {
+ if (crp->crp_etype == EBADMSG) {
DPRINTF(("%s: authentication hash mismatch for "
"packet in SA %s/%08lx\n", __func__,
ipsec_address(&saidx->dst, buf, sizeof(buf)),
@@ -666,7 +630,6 @@ esp_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
u_int idx, int skip, int protoff)
{
IPSEC_DEBUG_DECLARE(char buf[IPSEC_ADDRSTRLEN]);
- struct cryptodesc *crde = NULL, *crda = NULL;
struct cryptop *crp;
const struct auth_hash *esph;
const struct enc_xform *espx;
@@ -825,10 +788,10 @@ esp_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
prot = IPPROTO_ESP;
m_copyback(m, protoff, sizeof(u_int8_t), (u_char *) &prot);
- /* Get crypto descriptors. */
- crp = crypto_getreq(esph != NULL ? 2 : 1);
+ /* Get crypto descriptor. */
+ crp = crypto_getreq(cryptoid, M_NOWAIT);
if (crp == NULL) {
- DPRINTF(("%s: failed to acquire crypto descriptors\n",
+ DPRINTF(("%s: failed to acquire crypto descriptor\n",
__func__));
ESPSTAT_INC(esps_crypto);
error = ENOBUFS;
@@ -845,19 +808,14 @@ esp_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
goto bad;
}
- crde = crp->crp_desc;
- crda = crde->crd_next;
-
/* Encryption descriptor. */
- crde->crd_skip = skip + hlen;
- crde->crd_len = m->m_pkthdr.len - (skip + hlen + alen);
- crde->crd_flags = CRD_F_ENCRYPT;
- crde->crd_inject = skip + hlen - sav->ivlen;
+ crp->crp_payload_start = skip + hlen;
+ crp->crp_payload_length = m->m_pkthdr.len - (skip + hlen + alen);
+ crp->crp_op = CRYPTO_OP_ENCRYPT;
/* Encryption operation. */
- crde->crd_alg = espx->type;
if (SAV_ISCTRORGCM(sav)) {
- ivp = &crde->crd_iv[0];
+ ivp = &crp->crp_iv[0];
/* GCM IV Format: RFC4106 4 */
/* CTR IV Format: RFC3686 4 */
@@ -873,7 +831,10 @@ esp_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
}
m_copyback(m, skip + hlen - sav->ivlen, sav->ivlen, &ivp[4]);
- crde->crd_flags |= CRD_F_IV_EXPLICIT|CRD_F_IV_PRESENT;
+ crp->crp_flags |= CRYPTO_F_IV_SEPARATE;
+ } else if (sav->ivlen != 0) {
+ crp->crp_iv_start = skip + hlen - sav->ivlen;
+ crp->crp_flags |= CRYPTO_F_IV_GENERATE;
}
/* Callback parameters */
@@ -885,23 +846,23 @@ esp_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
/* Crypto operation descriptor. */
crp->crp_ilen = m->m_pkthdr.len; /* Total input length. */
- crp->crp_flags = CRYPTO_F_IMBUF | CRYPTO_F_CBIFSYNC;
+ crp->crp_flags |= CRYPTO_F_CBIFSYNC;
if (V_async_crypto)
crp->crp_flags |= CRYPTO_F_ASYNC | CRYPTO_F_ASYNC_KEEPORDER;
- crp->crp_buf = (caddr_t) m;
+ crp->crp_mbuf = m;
+ crp->crp_buf_type = CRYPTO_BUF_MBUF;
crp->crp_callback = esp_output_cb;
- crp->crp_opaque = (caddr_t) xd;
- crp->crp_session = cryptoid;
+ crp->crp_opaque = xd;
if (esph) {
/* Authentication descriptor. */
- crda->crd_alg = esph->type;
- crda->crd_skip = skip;
+ crp->crp_op |= CRYPTO_OP_COMPUTE_DIGEST;
+ crp->crp_aad_start = skip;
if (SAV_ISGCM(sav))
- crda->crd_len = 8; /* RFC4106 5, SPI + SN */
+ crp->crp_aad_length = 8; /* RFC4106 5, SPI + SN */
else
- crda->crd_len = m->m_pkthdr.len - (skip + alen);
- crda->crd_inject = m->m_pkthdr.len - alen;
+ crp->crp_aad_length = hlen;
+ crp->crp_digest_start = m->m_pkthdr.len - alen;
}
return crypto_dispatch(crp);
diff --git a/sys/netipsec/xform_ipcomp.c b/sys/netipsec/xform_ipcomp.c
index 96cffd6305a4..0529b2dda7c5 100644
--- a/sys/netipsec/xform_ipcomp.c
+++ b/sys/netipsec/xform_ipcomp.c
@@ -156,7 +156,7 @@ static int
ipcomp_init(struct secasvar *sav, struct xformsw *xsp)
{
const struct comp_algo *tcomp;
- struct cryptoini cric;
+ struct crypto_session_params csp;
/* NB: algorithm really comes in alg_enc and not alg_comp! */
tcomp = comp_algorithm_lookup(sav->alg_enc);
@@ -170,10 +170,11 @@ ipcomp_init(struct secasvar *sav, struct xformsw *xsp)
sav->tdb_compalgxform = tcomp;
/* Initialize crypto session */
- bzero(&cric, sizeof (cric));
- cric.cri_alg = sav->tdb_compalgxform->type;
+ memset(&csp, 0, sizeof(csp));
+ csp.csp_mode = CSP_MODE_COMPRESS;
+ csp.csp_cipher_alg = sav->tdb_compalgxform->type;
- return crypto_newsession(&sav->tdb_cryptoid, &cric, V_crypto_support);
+ return crypto_newsession(&sav->tdb_cryptoid, &csp, V_crypto_support);
}
/*
@@ -195,9 +196,9 @@ static int
ipcomp_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
{
struct xform_data *xd;
- struct cryptodesc *crdc;
struct cryptop *crp;
struct ipcomp *ipcomp;
+ crypto_session_t cryptoid;
caddr_t addr;
int error, hlen = IPCOMP_HLENGTH;
@@ -222,8 +223,12 @@ ipcomp_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
goto bad;
}
+ SECASVAR_LOCK(sav);
+ cryptoid = sav->tdb_cryptoid;
+ SECASVAR_UNLOCK(sav);
+
/* Get crypto descriptors */
- crp = crypto_getreq(1);
+ crp = crypto_getreq(cryptoid, M_NOWAIT);
if (crp == NULL) {
DPRINTF(("%s: no crypto descriptors\n", __func__));
IPCOMPSTAT_INC(ipcomps_crypto);
@@ -237,28 +242,26 @@ ipcomp_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
crypto_freereq(crp);
goto bad;
}
- crdc = crp->crp_desc;
-
- crdc->crd_skip = skip + hlen;
- crdc->crd_len = m->m_pkthdr.len - (skip + hlen);
- crdc->crd_inject = skip;
/* Decompression operation */
- crdc->crd_alg = sav->tdb_compalgxform->type;
-
+ crp->crp_op = CRYPTO_OP_DECOMPRESS;
+ crp->crp_payload_start = skip + hlen;
+ crp->crp_payload_length = m->m_pkthdr.len - (skip + hlen);
/* Crypto operation descriptor */
crp->crp_ilen = m->m_pkthdr.len - (skip + hlen);
- crp->crp_flags = CRYPTO_F_IMBUF | CRYPTO_F_CBIFSYNC;
- crp->crp_buf = (caddr_t) m;
+ crp->crp_flags = CRYPTO_F_CBIFSYNC;
+ crp->crp_mbuf = m;
+ crp->crp_buf_type = CRYPTO_BUF_MBUF;
crp->crp_callback = ipcomp_input_cb;
- crp->crp_opaque = (caddr_t) xd;
+ crp->crp_opaque = xd;
/* These are passed as-is to the callback */
xd->sav = sav;
xd->protoff = protoff;
xd->skip = skip;
xd->vnet = curvnet;
+ xd->cryptoid = cryptoid;
- SECASVAR_LOCK(sav);
- crp->crp_session = xd->cryptoid = sav->tdb_cryptoid;
@@ -288,8 +291,8 @@ ipcomp_input_cb(struct cryptop *crp)
int skip, protoff;
uint8_t nproto;
- m = (struct mbuf *) crp->crp_buf;
- xd = (struct xform_data *) crp->crp_opaque;
+ m = crp->crp_mbuf;
+ xd = crp->crp_opaque;
CURVNET_SET(xd->vnet);
sav = xd->sav;
skip = xd->skip;
@@ -396,9 +399,9 @@ ipcomp_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
{
IPSEC_DEBUG_DECLARE(char buf[IPSEC_ADDRSTRLEN]);
const struct comp_algo *ipcompx;
- struct cryptodesc *crdc;
struct cryptop *crp;
struct xform_data *xd;
+ crypto_session_t cryptoid;
int error, ralen, maxpacketsize;
IPSEC_ASSERT(sav != NULL, ("null SA"));
@@ -466,25 +469,23 @@ ipcomp_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
}
/* Ok now, we can pass to the crypto processing. */
+ SECASVAR_LOCK(sav);
+ cryptoid = sav->tdb_cryptoid;
+ SECASVAR_UNLOCK(sav);
/* Get crypto descriptors */
- crp = crypto_getreq(1);
+ crp = crypto_getreq(cryptoid, M_NOWAIT);
if (crp == NULL) {
IPCOMPSTAT_INC(ipcomps_crypto);
DPRINTF(("%s: failed to acquire crypto descriptor\n",__func__));
error = ENOBUFS;
goto bad;
}
- crdc = crp->crp_desc;
/* Compression descriptor */
- crdc->crd_skip = skip;
- crdc->crd_len = ralen;
- crdc->crd_flags = CRD_F_COMP;
- crdc->crd_inject = skip;
-
- /* Compression operation */
- crdc->crd_alg = ipcompx->type;
+ crp->crp_op = CRYPTO_OP_COMPRESS;
+ crp->crp_payload_start = skip;
+ crp->crp_payload_length = ralen;
/* IPsec-specific opaque crypto info */
xd = malloc(sizeof(struct xform_data), M_XDATA, M_NOWAIT | M_ZERO);
@@ -502,17 +503,15 @@ ipcomp_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
xd->skip = skip;
xd->protoff = protoff;
xd->vnet = curvnet;
+ xd->cryptoid = cryptoid;
/* Crypto operation descriptor */
crp->crp_ilen = m->m_pkthdr.len; /* Total input length */
- crp->crp_flags = CRYPTO_F_IMBUF | CRYPTO_F_CBIFSYNC;
- crp->crp_buf = (caddr_t) m;
+ crp->crp_flags = CRYPTO_F_CBIFSYNC;
+ crp->crp_mbuf = m;
+ crp->crp_buf_type = CRYPTO_BUF_MBUF;
crp->crp_callback = ipcomp_output_cb;
- crp->crp_opaque = (caddr_t) xd;
-
- SECASVAR_LOCK(sav);
- crp->crp_session = xd->cryptoid = sav->tdb_cryptoid;
- SECASVAR_UNLOCK(sav);
+ crp->crp_opaque = xd;
return crypto_dispatch(crp);
bad:
@@ -538,8 +537,8 @@ ipcomp_output_cb(struct cryptop *crp)
u_int idx;
int error, skip, protoff;
- m = (struct mbuf *) crp->crp_buf;
- xd = (struct xform_data *) crp->crp_opaque;
+ m = crp->crp_mbuf;
+ xd = crp->crp_opaque;
CURVNET_SET(xd->vnet);
idx = xd->idx;
sp = xd->sp;
@@ -572,7 +57