Understanding `tls_codec` and its traits
I have been working on adding support for the Signed Certificate Timestamp extension in the x509-cert crate. Since some data in this extension is TLS-encoded, I am using the tls_codec crate. But I'm a bit confused about the structure of some of the types and traits in tls_codec. Specifically:
- What's the difference between `TlsVecU8<T>` and `TlsByteVecU8`? To me it looks like `TlsByteVecU8` could just be a type alias: `type TlsByteVecU8 = TlsVecU8<u8>;`. Is this true? If yes, why do we have two entirely separate types?
- What is the difference between the `Serialize` and `SerializeBytes` traits (same for `Deserialize` and `DeserializeBytes`)? Are `Serialize` and `Deserialize` meant to be used only in an `std` environment (because they use `std::io::{Read, Write}`)? If yes, why can't `SerializeBytes` and `DeserializeBytes` work for both `std` and `no_std`?
- Why is `DeserializeBytes` impl'd for `TlsVecU8<T>` but `SerializeBytes` is not?
Answers to these would be greatly appreciated, especially the last one, because currently my types implement `Serialize` and `DeserializeBytes`, which looks a bit asymmetric to me.
cc @tarcieri @franziskuskiefer
> - What's the difference between `TlsVecU8<T>` and `TlsByteVecU8`? To me it looks like `TlsByteVecU8` could just be a type alias: `type TlsByteVecU8 = TlsVecU8<u8>;`. Is this true? If yes, why do we have two entirely separate types?
It's not a type alias; it's a more efficient version of the same thing.
`TlsVecU8<T>` has to call `T::tls_(de)serialize` for each element, while `TlsByteVecU8` knows `T == u8` and can thus skip the extra function call.
Unfortunately, Rust doesn't allow us to overload functions, and this is the quickest way around that (even if not the most elegant one).
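To illustrate the difference, here's a hypothetical sketch in plain Rust. These function names are made up and this is not the crate's actual internals; it only shows why a byte-specialized path can be cheaper than the generic one:

```rust
// Hypothetical sketch of the two code paths; not tls_codec's real internals.
// Both encode a u8-length-prefixed byte vector.

// Generic path: one serialize step per element, as TlsVecU8<T> must do.
fn serialize_generic(elems: &[u8], out: &mut Vec<u8>) {
    out.push(elems.len() as u8); // u8 length prefix
    for e in elems {
        // stands in for a per-element T::tls_serialize call
        out.push(*e);
    }
}

// Byte-specialized path: since T == u8, a single bulk copy suffices,
// which is roughly the shortcut TlsByteVecU8 can take.
fn serialize_bytes(elems: &[u8], out: &mut Vec<u8>) {
    out.push(elems.len() as u8);
    out.extend_from_slice(elems);
}
```

Both produce identical encodings; the specialized version just avoids the per-element call.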
> - What is the difference between the `Serialize` and `SerializeBytes` traits (same for `Deserialize` and `DeserializeBytes`)? Are `Serialize` and `Deserialize` meant to be used only in an `std` environment (because they use `std::io::{Read, Write}`)? If yes, why can't `SerializeBytes` and `DeserializeBytes` work for both `std` and `no_std`?
Yes, the way the `Serialize` trait is defined, it uses `Read`/`Write`, which requires `std`.
You can use `(De)SerializeBytes` for both `std` and `no_std`, but it's a different API.
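The shape of the two APIs can be sketched roughly like this. These are simplified stand-ins, not `tls_codec`'s exact trait definitions (in particular, the error types and method signatures are approximations):

```rust
use std::io::Write;

// Simplified stand-ins for the two API shapes; not tls_codec's exact traits.
trait Serialize {
    // std-only flavor: streams the encoding into any io::Write sink.
    fn tls_serialize<W: Write>(&self, writer: &mut W) -> std::io::Result<usize>;
}

trait SerializeBytes {
    // no_std-friendly flavor (needs only alloc): returns an owned buffer.
    fn tls_serialize(&self) -> Vec<u8>;
}

// A toy type: a byte string with a u8 length prefix.
struct Opaque(Vec<u8>);

impl Serialize for Opaque {
    fn tls_serialize<W: Write>(&self, writer: &mut W) -> std::io::Result<usize> {
        writer.write_all(&[self.0.len() as u8])?;
        writer.write_all(&self.0)?;
        Ok(1 + self.0.len())
    }
}

impl SerializeBytes for Opaque {
    fn tls_serialize(&self) -> Vec<u8> {
        let mut out = vec![self.0.len() as u8];
        out.extend_from_slice(&self.0);
        out
    }
}
```

Note that with both traits in scope, a plain `value.tls_serialize(...)` call is ambiguous; you'd disambiguate with fully qualified syntax like `SerializeBytes::tls_serialize(&value)`.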
> - Why is `DeserializeBytes` impl'd for `TlsVecU8<T>` but `SerializeBytes` is not?
No one needed it yet. But it should certainly be done.
Thanks, I didn't realize `TlsByteVecU8` is more efficient. I should definitely use it in my work. I'll stick to the `SerializeBytes`/`DeserializeBytes` versions, as they work with both `std` and `no_std`. As for the missing `SerializeBytes` impl, I'll raise a PR for that.
@franziskuskiefer a follow-up question:
I replaced `TlsVecU16<u8>` with `TlsByteVecU16` in my code, but there is a difference in decoding. E.g.

```rust
let bytes = [0, 3, 2, 1, 0]; // first two bytes are the length prefix in big-endian format, in this case 3 bytes
let result = TlsByteVecU16::tls_deserialize(&bytes).unwrap();
println!("{result:?}"); // prints (TlsByteVecU16 { vec: [0, 3, 2] }, [])
let result = TlsVecU16::<u8>::tls_deserialize(&bytes).unwrap();
println!("{result:?}"); // prints (TlsVecU16 { vec: [2, 1, 0] }, [])
```
Looks like `TlsByteVecU16` isn't skipping over the first two length-prefix bytes. They should decode to identical bytes, right?

Edit: BTW, the code above deserializes using the `DeserializeBytes` trait.
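For reference, here's a minimal sketch (plain Rust, independent of tls_codec) of the decoding I'd expect both types to perform for a u16 big-endian length prefix:

```rust
// Sketch of u16 big-endian length-prefix decoding; returns the decoded
// bytes plus the remaining input, mirroring the (value, rest) tuples above.
fn decode_u16_prefixed(bytes: &[u8]) -> Option<(Vec<u8>, &[u8])> {
    if bytes.len() < 2 {
        return None; // not enough input for the length prefix
    }
    let len = u16::from_be_bytes([bytes[0], bytes[1]]) as usize;
    let rest = &bytes[2..];
    if rest.len() < len {
        return None; // declared length exceeds the remaining input
    }
    Some((rest[..len].to_vec(), &rest[len..]))
}
```

For `[0, 3, 2, 1, 0]` this yields `([2, 1, 0], [])`, matching the `TlsVecU16::<u8>` output above.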
> Looks like `TlsByteVecU16` isn't skipping over the first two length-prefix bytes. They should decode to identical bytes, right?
That sounds like a bug.
I think we can close this issue now?