Add support for mapping old fields to new ones in TLV read macros #3378
Conversation
shaavan
left a comment
Looks good to me on going through the code!
Can we craft some tests for it?
/// Wraps a `match self {..}` statement and scans the fields in the match patterns (in the form
/// `ref $field_name: $field_ty`) for types marked `legacy`, skipping those fields.
#[proc_macro]
pub fn skip_legacy_fields(expr: TokenStream) -> TokenStream {
Maybe we can break the function into modular parts. Something like process_match_pattern for handling the Enum::Variant part and a process_field for the internal fields. It might help keep things organized and easier to follow!
Sadly, this stuff is just kinda inscrutable :(. I'll add more comments and split it but I'm not sure how much it'll help...
lightning-macros/src/lib.rs
Outdated
let is_init = macro_name == "_init_tlv_based_struct_field";
let ty_tokens = mac.tokens.clone().into_iter().skip(2).next();
if let Some(proc_macro2::TokenTree::Group(group)) = ty_tokens {
From what I understand, the input field for this would look something like
`field: _init_tlv_based_struct_field!(field_name, (legacy, ...))`, since the second element needs to be a group. I'm a bit unsure, though, about what exactly should go in place of the `...`; I'd love any insights on that!
Also, maybe it would be helpful to expand the docs a bit to clearly outline the expected input and behavior of the macro for future reference.
That is inherently a group; anything wrapped in () or {} is a group, even if it's just one token. That said, the code doesn't require the second element to be a group; it will accept anything that isn't.
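A hedged standalone sketch (plain Rust, not the PR's proc-macro code) of the grouping rule discussed above, using a declarative macro's `tt` matcher to count token trees:

```rust
// Hedged sketch: anything wrapped in `()` or `{}` is a single Group,
// so it matches as exactly one token tree (`tt`).
macro_rules! count_tts {
    ($($t:tt)*) => { 0usize $(+ { let _ = stringify!($t); 1 })* };
}

fn main() {
    // The whole `(legacy, u64, {}, {})` tuple is one parenthesized
    // group, hence a single token tree:
    assert_eq!(count_tts!((legacy, u64, {}, {})), 1);
    // Without wrapping parens, the same idea splits into separate
    // trees: `legacy`, `,`, `u64`:
    assert_eq!(count_tts!(legacy, u64), 3);
}
```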
lightning-macros/src/lib.rs
Outdated
res
}

/// Scans an enum definition for fields initialized to `LDK_DROP_LEGACY_FIELD_DEFINITION` and drops
Reading through the code, I wasn’t quite able to figure out how LDK_DROP_LEGACY_FIELD_DEFINITION will be used in the end. I’d love to get some insights on that! Thanks!
It's not; that's stale.
let self_ident = stream.next().unwrap();
expect_ident(&self_ident, Some("self"));
res.extend(proc_macro::TokenStream::from(self_ident));

let token_to_stream = |tok| proc_macro::TokenStream::from(tok);

let arms = stream.next().unwrap();
if let TokenTree::Group(group) = arms {
	let mut new_arms = TokenStream::new();

	let mut arm_stream = group.stream().into_iter().peekable();
	while arm_stream.peek().is_some() {
		let enum_ident = arm_stream.next().unwrap();
		let co1 = arm_stream.next().unwrap();
		expect_punct(&co1, ':');
		let co2 = arm_stream.next().unwrap();
		expect_punct(&co2, ':');
I am just putting this here for reference: if you are looking for a no-dependencies parser for the proc macro, I developed a PoC for the rust compiler a while back, https://github.com/rsmicro/kproc-macros. This simplifies the parsing a little bit IMHO, but I'm not sure it is worth it for just a single proc macro.
Eh, I think it's survivable for now; will see what others think.
Force-pushed 10ce004 to 5113f3c
Oops, fixed CI, but I realize we kinda forgot about this one, and #3342 (might) depend on it. Tagging 0.2 for that reason.
shaavan
left a comment
LGTM mod testing! 🚀
Per shaavan's previous feedback, can we test this?
As async stuff is very different purpose from the serialization stuff, do we want to add ser.rs and async.rs sub-modules (potentially still re-exporting the macros at the crate-level)?
bump.
fn token_to_stream(token: TokenTree) -> proc_macro::TokenStream {
	proc_macro::TokenStream::from(token)
}
IMO we can get rid of this helper
It removes a ton of really long lines caused by additional `map` calls using the function.
match self {
	$($st::$variant_name { $(ref $field, )* .. } => {
lightning_macros::skip_legacy_fields!(match self {
	$($st::$variant_name { $(ref $field: $fieldty, )* .. } => {
Could you help me understand why the `..` is necessary? It seems like all the fields in the variant should be present, so it shouldn't be necessary, but I get a seemingly unrelated error if I remove the code that re-pushes them in the proc macro.
It's used by the unread_variants thing. See RAAMonitorUpdateBlockingAction's serialization.
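For context, a hedged standalone sketch (not LDK code) of what `..` does in these generated match arms: in a struct-variant pattern it silently ignores any fields that are not explicitly bound, which is what lets arms for variants with extra unread fields still compile:

```rust
// Hedged sketch: `..` absorbs any fields not named in the pattern.
#[allow(dead_code)]
enum Action {
    Update { id: u64, extra: u8 },
}

fn main() {
    let a = Action::Update { id: 7, extra: 1 };
    let id = match a {
        // `extra` is ignored by `..` without being bound:
        Action::Update { ref id, .. } => *id,
    };
    assert_eq!(id, 7);
}
```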
lightning-macros/src/lib.rs
Outdated
/// ```ignore
/// drop_legacy_field_definition!(Self {
///     field1: _init_tlv_based_struct_field!(field1, option),
///     field2: _init_tlv_based_struct_field!(field2, (legacy, u64, {}, {})),
I'm trying to understand what we'd want to put in the {}, {}. In my mind this use case is legacy fields that are no longer present and that's pretty much it -- do you have an example of what kind of custom logic would be useful here?
See the new test.
lightning/src/util/ser_macros.rs
Outdated
/// If `$fieldty` is `(legacy, $ty, $read, $write)` then, when writing, the expression `$write` will
/// be called which returns an `Option` and is written as a TLV if `Some`. When reading, an
/// optional field of type `$ty` is read. The code in `$read` is always executed after all TLVs
/// have been read.
Can you add a legacy example to the example code below?
Linting is also failing
Force-pushed 4be6159 to 6f29478
Added a test and a few additional features since it makes using the stuff much easier. Had to rebase on upstream to get rustfmt checks on the files here.
lightning/src/util/ser_macros.rs
Outdated
encoded[3] = 10;
let read = <ExpandedField as Readable>::read(&mut &encoded[..]).unwrap();
assert_eq!(read, ExpandedField { new_field: (10, 43) });
Hi!
So, I noticed that changing the serialized value for old_field ends up updating new_field during deserialization, which makes sense for backward compatibility.
That said, I was wondering about this scenario: if old_field were public and new_field private, someone aware of old_field could modify its value, which would indirectly change the private new_field. Do we intend for a private field to be affected this way by something external?
This caught my eye while going through the code, and I'd love to better understand the reasoning behind this design choice. Do let me know what you think! Thanks!
So, I noticed that changing the serialized value for old_field ends up updating new_field during deserialization, which makes sense for backward compatibility.
That's because the read logic in the example does so explicitly.
That said, I was wondering about this scenario: if old_field were public and new_field private, someone aware of old_field could modify its value, which would indirectly change the private new_field. Do we intend for a private field to be affected this way by something external?
This logic exists to build a new_field while removing old_field. old_field no longer exists at all; it's only a serialization-time construct, so it can't be modified in the struct, it only exists in the serialized copy.
Force-pushed 6f29478 to 82c26ec
Rebased due to conflict.
if let TokenTree::Group(group) = ty_info {
	let first_group_tok = group.stream().into_iter().next().unwrap();
	if let TokenTree::Ident(ident) = first_group_tok {
		if ident.to_string() == "legacy" {
I'm curious if there's some way to avoid the string allocation by converting ident to a proc_macro2 ident and comparing it to Ident::new("legacy", ident.span())?
I believe `Ident::new("legacy", ident.span())` will create a String under the hood. Really, Idents need to support fetching the string they likely have, but oh well...
for field in new_fields {
	if let syn::Expr::Macro(syn::ExprMacro { mac, .. }) = &field.expr {
		let macro_name = mac.path.segments.last().unwrap().ident.to_string();
		let is_init = macro_name == "_ignore_arg";
I somehow would think that the ignore_macro would be expanded before the proc_macro would be able to view the macro name, if that makes sense. Any rules of thumb for how the macros get expanded? Tried to read this https://rustc-dev-guide.rust-lang.org/macro-expansion.html but it's a lot to take in
Uhhh, yea, uhhh, no idea? I think it's outside to inside? But I never looked; this just works in tests :)
lightning/Cargo.toml
Outdated
[dependencies]
lightning-types = { version = "0.3.0", path = "../lightning-types", default-features = false }
lightning-invoice = { version = "0.34.0", path = "../lightning-invoice", default-features = false }
lightning-macros = { version = "0.1", path = "../lightning-macros" }
This should depend on lightning-macros 0.2, and we need to bump lightning-macros version to 0.2.0+git.
Ugh, can we just make them all 0.0 now? :)
/// Reads a TLV stream with the given fields to build a struct/enum of type `$thing`
#[doc(hidden)]
#[macro_export]
macro_rules! _decode_and_build {
Could we add docs to all these internal helper macros describing their intended behavior? They are indeed inscrutable, and I'm sure I'll have forgotten about what they are doing before I encounter them again.
Not sure what additional docs you want here, do you have preferred language? It reads a TLV stream and builds the object with the given fields.
});

#[test]
fn test_legacy_conversion() {
Thanks for adding a test. I'd like to double-down on Val's request though: can we add fuzzing for this?
Not sure what you mean by fuzzing? Are you worried that the codegen is going to generate bad code, or that there's some bug in the generated code which will lead to a panic? I'm not actually sure how to write a fuzzer in either case: in the first we're fuzzing codegen; in the second we have to define some set of behavior in the generated code and fuzz that, which is more likely where the issue lies (and if we don't cover some behavior then the fuzzer won't work).
Force-pushed 82c26ec to 2e4ecbf
impl_writeable_tlv_based!(ExpandedField, {
	(0, old_field, (legacy, u8, {
		// Sadly the type-checker needs some help
		let _: &(u8, u8) = &new_field;
		if let Some(old_field) = old_field {
			new_field.0 = old_field;
		}
	}, |us: &ExpandedField| Some(us.new_field.0))),
	(1, new_field, required),
});
If both TLV 0 and 1 are present, isn't running the read expression unnecessary (i.e., we would be overriding new_field.0 with the same value assuming the write expression is correct)? We only need to run the read expression if TLV 1 is not present, but wouldn't we have already failed since TLV 1 is marked required?
Sure, this is just the demo. Presumably a real implementation wouldn't do the overwriting if new_field is set already (assuming it's an Option, which most new fields would be).
lightning/src/util/ser_macros.rs
Outdated
// Sadly the type-checker needs some help
let _: &(u8, u8) = &new_field;
if let Some(old_field) = old_field {
	new_field.0 = old_field;
}
I'm a bit confused what a legitimate use case for the read closure would be. Do you have an example? It seems like default_value accomplishes a lot of its potential goals if a newer field isn't set on read.
default_value doesn't let you copy a value from an old field to a new one, though.
I'm not following, in the ChannelDetails impl for example we copy from user_channel_id_low into the new field user_channel_id?
I thought default_value didn't, until I tried it, and then I learned it does :). Strictly speaking, this doesn't do anything that default_value can't do on the read end, so you could argue this is like default_value with a write-side half. It doesn't, however, have to return a field, unlike default_value which has to be an expression that returns the field.
Ah okay, back to my original question then, is there an example of when we'd want that? Just want to make sure we're not adding extra complexity without a good reason
legacy's $read is a bit more flexible - it lets us run code after everything has been read, not just when we're reading a new field, and also lets us run code in relation to a field that we don't actually store at all, whereas default_value's expr is required to return a value which is then stored. This lets us, for example, use the value read in the legacy field to update a default_value field even if it was written.
So in that scenario, we released a version that wrote an incorrect or outdated TLV and legacy allows us to correct it in a following version? That seems to rely on a relevant legacy TLV being present. It seems like if we want to solve that issue, we should just have a general closure that runs after all TLVs are read (including legacy), rather than tying it to a specific legacy TLV?
Nah, I think you're right, I removed the read expression.
PR description needs updating now.
I think I was able to migrate ChannelDetails:

impl_writeable_tlv_based!(ChannelDetails, {
	(1, inbound_scid_alias, option),
	(2, channel_id, required),
	(3, channel_type, option),
	(4, counterparty, required),
	(5, outbound_scid_alias, option),
	(6, funding_txo, option),
	(7, config, option),
	(8, short_channel_id, option),
	(9, confirmations, option),
	(10, channel_value_satoshis, required),
	(12, unspendable_punishment_reserve, option),
	// `user_channel_id` used to be a single u64 value. In order to remain backwards compatible with
	// versions prior to 0.0.113, the u128 is serialized as two separate u64 values.
	(14, _user_channel_id_low, (legacy, u64, {},
		|us: &ChannelDetails| Some((us.user_channel_id << 64 >> 64) as u64))),
	(16, _balance_msat, (legacy, u64, {}, |us: &ChannelDetails| {
		Some(us.next_outbound_htlc_limit_msat)
	})),
	(18, outbound_capacity_msat, required),
	(19, next_outbound_htlc_limit_msat, (default_value, outbound_capacity_msat.0.unwrap() as u64)),
	(20, inbound_capacity_msat, required),
	(21, next_outbound_htlc_minimum_msat, (default_value, 0)),
	(22, confirmations_required, option),
	(24, force_close_spend_delay, option),
	(26, is_outbound, required),
	(28, is_channel_ready, required),
	(30, is_usable, required),
	(32, is_announced, required),
	(33, inbound_htlc_minimum_msat, option),
	(35, inbound_htlc_maximum_msat, option),
	(37, _user_channel_id_high_opt,
		(legacy, u64, {}, |us: &ChannelDetails| Some((us.user_channel_id >> 64) as u64))),
	(39, feerate_sat_per_1000_weight, option),
	(41, channel_shutdown_state, option),
	(43, pending_inbound_htlcs, optional_vec),
	(45, pending_outbound_htlcs, optional_vec),
	(47, user_channel_id,
		(default_value,
			((_user_channel_id_high_opt.unwrap_or(0) as u128) << 64) +
			_user_channel_id_low.unwrap_or(0) as u128
		)
	),
});
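The low/high `user_channel_id` split relies on simple u128 bit arithmetic; a hedged standalone sketch (not LDK code) of the round trip:

```rust
// Hedged sketch: splitting a u128 `user_channel_id` into two legacy
// u64 TLV values and recombining them on read.
fn split(user_channel_id: u128) -> (u64, u64) {
    // `<< 64 >> 64` masks off the high half, as in the macro invocation.
    let low = (user_channel_id << 64 >> 64) as u64;
    let high = (user_channel_id >> 64) as u64;
    (low, high)
}

fn combine(low: Option<u64>, high: Option<u64>) -> u128 {
    // Legacy fields are read as `Option`s; absent halves default to 0.
    ((high.unwrap_or(0) as u128) << 64) + low.unwrap_or(0) as u128
}

fn main() {
    let id: u128 = (42u128 << 64) | 7;
    let (low, high) = split(id);
    assert_eq!((low, high), (7, 42));
    assert_eq!(combine(Some(low), Some(high)), id);
    // An old serialization carrying only the low half still reads:
    assert_eq!(combine(Some(7), None), 7);
}
```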
Cool! Indeed we can. I tweaked your patch to not write a new field for
lightning/src/util/ser_macros.rs
Outdated
$({
	$crate::_run_legacy_tlv_read_logic!($field, $fieldty);
})*
Not sure I understand why this is called both here and from _run_legacy_tlv_read_logic.
I'm confused, this is calling _run_legacy_tlv_read_logic?
Ah, sorry. I meant from _decode_tlv_stream_range, but seems that it is skipped there because we're using SKIP_LEGACY here when calling that.
impl<T: Clone> Clone for RequiredWrapper<T> {
	fn clone(&self) -> Self {
		Self(self.0.clone())
	}
}
impl<T: Copy> Copy for RequiredWrapper<T> {}
Why is Clone needed?
Copy requires Clone.
Err... then why is Copy needed? :)
It isn't strictly needed, but it makes working with default_value and legacy a bit nicer, e.g. the new code for ChannelDetails uses it to make its use of default_value not require reaching into the RequiredWrapper and breaking the abstraction.
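The Clone/Copy relationship discussed above can be shown in isolation (a hedged sketch, not LDK's actual RequiredWrapper):

```rust
// Hedged sketch: `Copy` has `Clone` as a supertrait, so any type that
// implements `Copy` must implement `Clone` too.
#[derive(Clone, Copy)]
struct Wrapper<T>(T);

fn main() {
    let a = Wrapper(5u64);
    let b = a; // a bitwise copy; `a` remains usable afterwards
    assert_eq!(a.0 + b.0, 10);
}
```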
(_unused, user_channel_id, (static_value,
	_user_channel_id_low.unwrap_or(0) as u128 | ((_user_channel_id_high.unwrap_or(0) as u128) << 64)
static_value doesn't have a path forward for eventually getting rid of the old fields (as opposed to default_value), though that may not be something we care about.
I'm not sure it matters, tbh? It's fine to just split it.
///
/// Specifically, it expects input like the following, simply dropping `field3` and the
/// `: $field_ty` after each field name.
/// ```ignore
Seems better to avoid ignore if possible, here and below
Hmm, I can't get them to run in doctests 🤷♂️
CI sad as well
Force-pushed 4f2885d to bcd6981
I believe all comments have been addressed.
valentinewallace
left a comment
Oops, thought I submitted these. LGTM otherwise
LGTM modulo resolution on #3378 (comment).
Force-pushed 38ef967 to d69a843
valentinewallace
left a comment
Feel free to squash IMO. Also CI is sad
Force-pushed 90d3e87 to 0d1801a
Squashed with three small fixups:
(41, channel_shutdown_state, option),
(43, pending_inbound_htlcs, optional_vec),
(45, pending_outbound_htlcs, optional_vec),
(_unused, user_channel_id, (static_value,
Could we use _ instead of _unused? The latter leaks implementation details of the macro and may be confusing to someone reading this, like mistakenly thinking user_channel_id is unused.
`_` doesn't qualify as an expr, apparently:

error: no rules expected the token `_`
   --> lightning/src/ln/channel_state.rs:584:3
    |
584 |     (_, user_channel_id, (static_value,
    |      ^ no rules expected this token in macro call
    |
   ::: lightning/src/util/ser_macros.rs:980:1
    |
980 | macro_rules! impl_writeable_tlv_based {
    | ------------------------------------- when calling this macro
lightning/src/util/ser_macros.rs
Outdated
| // Sadly the type-checker needs some help | ||
| let _: &(u8, u8) = &new_field; | ||
| if let Some(old_field) = old_field { | ||
| new_field.0 = old_field; | ||
| } |
PR description needs updating now.
encoded[3] = 10;
let read = <ExpandedField as Readable>::read(&mut &encoded[..]).unwrap();
assert_eq!(read, ExpandedField { new_field: (43, 42) });
Would it make sense to add another "sub-test" here? Something like this:
// When the data is re-encoded, `old_field` is recomputed from the corresponding `new_field` value.
// This ensures that any previously stored `old_field` values are replaced with the new derived value,
// allowing us to phase out `old_field` over time.
let re_encode = read.encode();
assert_eq!(re_encode, <Vec<u8>>::from_hex("0700012b01022b2a").unwrap());
It's impossible for it to be any different: there is only the new_field in memory (which the assert implicitly checked because it used the explicit struct syntax), so we can't calculate it via anything else :)
Got it! Thanks for the clarification! 🚀
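The bytes in that suggested assertion decompose cleanly; a hedged plain-Rust sketch of the layout implied by `0700012b01022b2a` (this is NOT LDK's real writer, which uses BigSize varints; just the byte shape of this particular test):

```rust
// Hedged sketch of the byte layout only: 0x07 total length, then
// (type 0x00, len 0x01, value 0x2b) for the legacy field and
// (type 0x01, len 0x02, values 0x2b 0x2a) for `new_field`.
struct ExpandedField {
    new_field: (u8, u8),
}

fn write_tlvs(s: &ExpandedField) -> Vec<u8> {
    let mut records = Vec::new();
    // Legacy TLV 0: recomputed from the in-memory field at write time.
    records.extend_from_slice(&[0x00, 0x01, s.new_field.0]);
    // Required TLV 1: the real field.
    records.extend_from_slice(&[0x01, 0x02, s.new_field.0, s.new_field.1]);
    let mut out = vec![records.len() as u8]; // length prefix (sketch)
    out.extend(records);
    out
}

fn main() {
    let read = ExpandedField { new_field: (43, 42) };
    // Matches hex 0700012b01022b2a from the test above.
    assert_eq!(write_tlvs(&read), vec![0x07, 0x00, 0x01, 0x2b, 0x01, 0x02, 0x2b, 0x2a]);
}
```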
As we've grown, we regularly face a question of whether to "break
out" of our nice TLV-based struct/enum reading/writing macros in
order to handle mapping legacy fields to new ones, or deal with
keeping the legacy fields and handling at runtime what should be
handled at (de-)serialization time.
This attempts to address this tradeoff by adding support for a
"legacy" TLV type. This TLV style allows us to write a value
which is not in the struct/enum's layout in memory but can be
calculated from its contents at write-time. The field will also be
read and can be used to calculate other fields defined with
`static_value` or `default_value`.
It takes a type and a `$write` expression.
They are always read as `option`s to retain a future ability to
remove the `legacy` fields.
Sadly, there's two issues with doing this trivially which force us
into `proc-macro` land:
(a) when matching the original struct we want to list the fields
in the match arm so that we have them available to write.
Sadly, we can't call a macro to have it write out the field
name based on the field type, so instead need to pass the whole
match to a proc-macro and have it walk through to find the
types and skip fields that are `legacy`.
(b) when building a final struct/enum after reading, we need to
list a few `$field: $expr`s and cannot decide whether to
include a field based on a regular macro.
The proc-macros to do so aren't trivial, but they aren't that bad
either. We could instead try to rewrite our TLV stream processing
macros to handle a new set of TLVs which are passed via a separate
argument, but as TLVs are required to be ordered by type this
requires a good chunk of additional generated code in each TLV
write. It also would result in a somewhat less ergonomic callsite
as it would no longer fit into our existing list of TLVs.
In the previous commit we added the ability to write legacy fields without having a corresponding field in the in-memory encoding of structs and enums, and without breaking out of the high-level utility macros like `impl_writeable_tlv_based`. However, the write-side expr was required to be a static expr, without the ability to access `self`, limiting its ability to write useful fields. Here we swap it for a `Fn` which takes the object being serialized.
This allows us to use `impl_writeable_tlv_based!` again rather than manually writing the de/ser logic.
We previously declined to document various read/write styles in `impl_writeable_tlv_based` to avoid committing to them in the public API, but they've been quite stable over time, so it's not clear that was worth anything. Here we at least document the read/write styles we already had documented in a comment at the top of the file, which isn't all of them, but it is all of the useful ones.
Updated docs: $ git diff-tree -U1 0d1801af5 6ae5b5f07
diff --git a/lightning/src/util/ser_macros.rs b/lightning/src/util/ser_macros.rs
index 49e995d2c..8dbf0635d 100644
--- a/lightning/src/util/ser_macros.rs
+++ b/lightning/src/util/ser_macros.rs
@@ -15,17 +15,2 @@
-// There are quite a few TLV serialization "types" which behave differently. We currently only
-// publicly document the `optional` and `required` types, not supporting anything else publicly and
-// changing them at will.
-//
-// Some of the other types include:
-// * (default_value, $default) - reads optionally, reading $default if no TLV is present
-// * (static_value, $value) - ignores any TLVs, always using $value
-// * required_vec - reads into a Vec without a length prefix, failing if no TLV is present.
-// * optional_vec - reads into an Option<Vec> without a length prefix, continuing if no TLV is
-// present. Writes from a Vec directly, only if any elements are present. Note
-// that the struct deserialization macros return a Vec, not an Option.
-// * upgradable_option - reads via MaybeReadable.
-// * upgradable_required - reads via MaybeReadable, requiring a TLV be present but may return None
-// if MaybeReadable::read() returns None.
-
/// Implements serialization for a single TLV record.
@@ -413,3 +398,3 @@ macro_rules! _decode_tlv {
let _field: &Option<$fieldty> = &$field;
- _decode_tlv!($outer_reader, $reader, $field, option);
+ $crate::_decode_tlv!($outer_reader, $reader, $field, option);
}};
@@ -946,6 +931,13 @@ macro_rules! _decode_and_build {
-/// Implements [`Readable`]/[`Writeable`] for a struct storing it as a set of TLVs
+/// Implements [`Readable`]/[`Writeable`] for a struct storing it as a set of TLVs. Each TLV is
+/// read/written in the order they appear and contains a type number, a field name, and a
+/// de/serialization method, from the following:
+///
/// If `$fieldty` is `required`, then `$field` is a required field that is not an [`Option`] nor a [`Vec`].
/// If `$fieldty` is `(default_value, $default)`, then `$field` will be set to `$default` if not present.
+/// If `$fieldty` is `(static_value, $static)`, then `$field` will be set to `$static`.
/// If `$fieldty` is `option`, then `$field` is optional field.
+/// If `$fieldty` is `upgradable_option`, then `$field` is optional and read via [`MaybeReadable`].
+/// If `$fieldty` is `upgradable_required`, then `$field` is stored as an [`Option`] and read via
+/// [`MaybeReadable`], requiring the TLV to be present.
/// If `$fieldty` is `optional_vec`, then `$field` is a [`Vec`], which needs to have its individual elements serialized.
@@ -955,3 +947,3 @@ macro_rules! _decode_and_build {
/// `Some`. When reading, an optional field of type `$ty` is read (which can be used in later
-/// `default_value` or `static_value` fields).
+/// `default_value` or `static_value` fields by referring to the value by name).
///
@@ -965,2 +957,3 @@ macro_rules! _decode_and_build {
/// tlv_vec_type_integer: Vec<u32>,
+/// tlv_upgraded_integer: u32,
/// }
@@ -972,2 +965,4 @@ macro_rules! _decode_and_build {
/// (3, tlv_vec_type_integer, optional_vec),
+/// (4, unwritten_type, (legacy, u32, |us: &LightningMessage| Some(us.tlv_integer))),
+/// (_unused, tlv_upgraded_integer, (static_value, unwritten_type.unwrap_or(0) * 2))
/// });
@@ -976,2 +971,3 @@ macro_rules! _decode_and_build {
/// [`Readable`]: crate::util::ser::Readable
+/// [`MaybeReadable`]: crate::util::ser::MaybeReadable
/// [`Writeable`]: crate::util::ser::Writeable
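For readers skimming the doc diff above, the read styles it lists can be modeled roughly like this. A `BTreeMap` of pre-parsed TLVs stands in for the real byte stream, and the helper name and tuple return type are purely illustrative:

```rust
use std::collections::BTreeMap;

// Sketch of the read-style semantics documented above, one value per style.
fn read_styles(
	tlvs: &BTreeMap<u64, u32>,
) -> Result<(u32, u32, u32, Option<u32>), &'static str> {
	// `required`: the TLV must be present or the read fails.
	let required = *tlvs.get(&0).ok_or("missing required TLV")?;
	// `(default_value, 7)`: read optionally, falling back to the default.
	let defaulted = tlvs.get(&1).copied().unwrap_or(7);
	// `(static_value, 99)`: any TLV on the wire is ignored entirely.
	let static_val = 99;
	// `option`: present-or-not, surfaced as an `Option`.
	let optional = tlvs.get(&3).copied();
	Ok((required, defaulted, static_val, optional))
}

fn main() {
	let mut tlvs = BTreeMap::new();
	tlvs.insert(0u64, 5u32);
	assert_eq!(read_styles(&tlvs), Ok((5, 7, 99, None)));
}
```

The `upgradable_option`/`upgradable_required` styles are not modeled here; they additionally allow a present-but-unparseable value to be swallowed as `None` via `MaybeReadable`.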