feat: Added in-memory storage for testing purposes #59
base: main
Conversation
👋 Thanks for assigning @tnull as a reviewer!
tnull left a comment:
Thanks for looking into this!
Generally this goes in the right direction, but we definitely need to avoid re-allocating everything on every operation.
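A minimal sketch of the pattern being asked for here (the type and field names are assumptions for illustration, not this PR's actual code): keep one long-lived map behind a lock and mutate it in place, rather than rebuilding or cloning the map on every operation.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Hypothetical record type, used only for this sketch.
struct VssDbRecord {
	version: i64,
	value: Vec<u8>,
}

struct InMemoryBackendImpl {
	// One long-lived map behind a lock...
	store: Mutex<HashMap<String, VssDbRecord>>,
}

impl InMemoryBackendImpl {
	fn put(&self, key: String, record: VssDbRecord) {
		// ...mutated in place. Cloning or rebuilding the whole map on every
		// put/delete is the re-allocation pattern the review asks to avoid.
		let mut guard = self.store.lock().unwrap();
		guard.insert(key, record);
	}
}
```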
Force-pushed 4980a75 to 25d57e3
@tnull I have made the required changes.
tnull left a comment:
Looks much better, but I think we still need to handle global_version properly, even if we're currently not using it client-side.
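As a rough illustration of the kind of handling meant here (not this PR's code): the store-wide global_version supplied with a write should be checked and bumped atomically with that write, analogous to the per-key versions. All names below are assumptions, and the exact conflict semantics should follow the VSS protocol.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

struct GlobalVersions {
	// Hypothetical per-store global versions, keyed by store_id.
	inner: Mutex<HashMap<String, i64>>,
}

impl GlobalVersions {
	// Rejects the write if the client-supplied global_version is stale,
	// otherwise bumps it together with the write it guards.
	fn check_and_bump(&self, store_id: &str, expected: i64) -> Result<(), &'static str> {
		let mut versions = self.inner.lock().unwrap();
		let current = versions.entry(store_id.to_string()).or_insert(0);
		if *current != expected {
			return Err("conflict: stale global_version");
		}
		*current += 1;
		Ok(())
	}
}
```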
🔔 1st Reminder Hey @tankyleo! This PR has been waiting for your review.
Force-pushed 25d57e3 to 9012e95
tnull left a comment:
One comment; I'll take another look once @tankyleo has also had a chance to do a review round here.
Force-pushed 9012e95 to 3b434d0
@tankyleo Can you please review it?
🔔 3rd Reminder Hey @tankyleo! This PR has been waiting for your review.
🔔 4th Reminder Hey @tankyleo! This PR has been waiting for your review.
🔔 5th Reminder Hey @tankyleo! This PR has been waiting for your review.
tankyleo left a comment:
Sorry for the delay!
Force-pushed 3b434d0 to e0c31bb
@tankyleo I have made the required changes! Can you please review it?
Force-pushed 8003119 to 5898609
tnull left a comment:
When testing integration with LDK Node locally I found that the tests are currently failing. I now opened #62 to add LDK Node integration tests to our CI here. It would be great if that could land first, and we could also add a CI job for the in-memory store as part of this PR then, ensuring the implementation actually works as expected.
🔔 1st Reminder Hey @tankyleo! This PR has been waiting for your review.
🔔 2nd Reminder Hey @tankyleo! This PR has been waiting for your review.
@Harshdev098 Please rebase now that #62 landed to make use of the new CI checks here.
Force-pushed 81ca808 to 1f18bd8
🔔 5th Reminder Hey @tankyleo! This PR has been waiting for your review.
Force-pushed 84cfed6 to 939559f
tnull left a comment:
Right, and now you're hitting the errors that I originally found when running LDK Node against the in-memory store. Very likely, they are due to the in-memory implementation behaving in an unexpected manner. We need to fix this before this PR can get merged. Please take a look, but let me know if you end up getting stuck or need some assistance figuring this out.
🔔 6th Reminder Hey @tankyleo! This PR has been waiting for your review.
Force-pushed 51a791d to 0168750
Hey @tnull, I have updated the code and the unit tests are passing against the LDK Node tests, but I don't understand the cause of the integration failures.
Force-pushed 92bd1e3 to bd3eca4
Force-pushed 9ec0e79 to 45fecd1
tankyleo left a comment:
Sorry for the delay again. I have cleared all the other priorities, this is now top priority :) Here are some comments, mostly on the types we pass to the different functions.
Will continue the review tomorrow, 100%! Thank you again.
-	ErrorKind::Other,
-	format!("Failed to drop database {}: {}", db_name, e),
- )
+ Error::new(ErrorKind::Other, format!("Failed to drop database {}: {}", db_name, e))
Sorry, the formatting changes in this file were done in a CI-specific PR we merged; go ahead and rebase and drop them, thank you :)
@Harshdev098 Sorry, I take it back, don't rebase to a new commit just yet :) If you can just drop these changes and keep the same base commit on main, we will rebase to a newer base commit as necessary, thank you.
Ohh sorry @tankyleo, but I have done it already; my branch is now up to date with main.
@Harshdev098 Want to make sure you know about git reflog? It would have allowed you to revert this easily before pushing, but all good now!
rust/impls/src/in_memory_store.rs (Outdated)
let vss_delete_records: Vec<VssDbRecord> = request
	.delete_items
	.into_iter()
	.map(|kv| build_vss_record(user_token.clone(), store_id.clone(), kv))
	.collect();
We also do not need to build VssDbRecords for the delete_items, so let's delete those.
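A minimal sketch of the suggested shape (the key layout and names are assumptions, not taken from the PR): a delete only needs the composite key, so no full record has to be built.

```rust
use std::collections::HashMap;

// Hypothetical request item: only the key matters for deletes.
struct KeyValue {
	key: String,
}

// Hypothetical composite-key layout; the real implementation may differ.
fn build_key(user_token: &str, store_id: &str, key: &str) -> String {
	format!("{}#{}#{}", user_token, store_id, key)
}

fn apply_deletes(
	store: &mut HashMap<String, Vec<u8>>, user_token: &str, store_id: &str,
	delete_items: Vec<KeyValue>,
) {
	for kv in delete_items {
		// No VssDbRecord is constructed: the key alone removes the entry.
		store.remove(&build_key(user_token, store_id, &kv.key));
	}
}
```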
Force-pushed 45fecd1 to c2a140e
@tankyleo I have updated the code.
tankyleo left a comment:
Some more comments focusing on in_memory_store, thank you.
/// In-memory implementation of the VSS Store.
pub struct InMemoryBackendImpl {
	store: Arc<RwLock<HashMap<String, VssDbRecord>>>,
Can you share why you picked an RwLock here? I always default to Mutex unless it will be read-heavy or there are good reasons to use RwLock.
VSS is rather even-handed / write-heavy, so my gut instinct is to go for Mutex.
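For context, the difference under discussion is just the lock type around the shared map; a sketch with assumed names, not the PR's code:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

struct VssDbRecord;

struct InMemoryBackendImpl {
	// A Mutex gives readers and writers the same exclusive lock, which is the
	// simpler default for a write-heavy workload like VSS; Arc<RwLock<..>>
	// mainly pays off when concurrent reads vastly outnumber writes.
	store: Arc<Mutex<HashMap<String, VssDbRecord>>>,
}
```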
Actually, I had used a Mutex earlier but changed to RwLock while debugging the integration test that was failing!
Sounds good, ok. Would you consider switching back to Mutex? Or are there bugs that come up specifically when using Mutex?
Switching back to Mutex; it was a deadlock issue at that time!
Also, I found that when testing the LDK Node integration test against the memory store, I need to restart the server after every update. I think the integration test needs fresh data when it starts, and it gives an error when run multiple times without restarting the server, which can make for a bad developer experience, @tankyleo!?
@Harshdev098 Does this have to do with the choice between Mutex and RwLock? I'll look into this.
Probably not; you can reproduce it by just running the LDK Node integration test multiple times against the same server without restarting it. I think it could be a problem with the test, because every time we run it, it fails while setting up the first node with an Error value of ReadFailed.
Have you had success passing the LDK integration test against in_memory even once? I've passed it against postgres all good, but in_memory_store seems to hang.
/// In-memory implementation of the VSS Store.
pub struct InMemoryBackendImpl {
	store: Arc<RwLock<HashMap<String, VssDbRecord>>>,
How about we use a BTreeMap here to avoid having to resort all the keys on every subsequent call to list_key_versions further down below ?
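A rough sketch of the idea (names and key layout assumed): because a BTreeMap keeps its keys ordered, all entries sharing a prefix form one contiguous range, so list_key_versions can walk that range instead of collecting and re-sorting every key in the map on each call.

```rust
use std::collections::BTreeMap;

struct VssDbRecord {
	version: i64,
}

// Lists (key, version) pairs whose map key starts with `prefix`, assuming
// keys never contain char::MAX so the upper bound closes the range.
fn list_key_versions(
	store: &BTreeMap<String, VssDbRecord>, prefix: &str,
) -> Vec<(String, i64)> {
	let lower = prefix.to_string();
	let upper = format!("{}{}", prefix, char::MAX);
	store.range(lower..upper).map(|(k, r)| (k.clone(), r.version)).collect()
}
```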
}

impl InMemoryBackendImpl {
	/// Creates an in-memory instance.
Let's not add comments that are obvious. I prefer we use them sparingly, only for non-obvious code.
But it gives a "missing documentation for an associated function" warning when the comment is not added; should I allow missing documentation for this with #[allow(missing_docs)]?
Ah thanks, yes, keep those docs then, my bad!
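For reference, the trade-off discussed above, sketched under the assumption that the crate enforces the missing_docs lint: keeping the one-line doc comment is the lighter fix compared to adding #[allow(missing_docs)] per item.

```rust
// Crate root, assumed for this sketch:
#![deny(missing_docs)]
//! Hypothetical crate-level docs.

/// In-memory implementation of the VSS Store.
pub struct InMemoryBackendImpl;

impl InMemoryBackendImpl {
	/// Creates an in-memory instance.
	// The short doc above satisfies the lint; no `#[allow(missing_docs)]` needed.
	pub fn new() -> Self {
		InMemoryBackendImpl
	}
}
```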
- if args.len() != 2 {
- 	eprintln!("Usage: {} <config-file-path>", args[0]);
+ if args.len() < 2 {
+ 	eprintln!("Usage: {} <config-file-path> [--in-memory]", args[0]);
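For readers without the full diff, a minimal self-contained sketch of how such an override could be parsed; the real binary's argument handling may differ, and only the flag name is taken from the diff above.

```rust
use std::env;

fn main() {
	let args: Vec<String> = env::args().collect();
	if args.len() < 2 {
		eprintln!("Usage: {} <config-file-path> [--in-memory]", args[0]);
		std::process::exit(1);
	}
	let config_path = &args[1];
	// Hypothetical override: the flag only flips the storage backend; every
	// other setting still comes from the config file.
	let use_in_memory = args.iter().any(|a| a == "--in-memory");
	println!("config: {}, in-memory override: {}", config_path, use_in_memory);
}
```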
Was this specific flag requested by a user? I'd be in favor of deleting it and keeping the in_memory configuration to a single spot, as we currently have it in the configuration file.
But let me know what you think, curious to hear your thoughts.
I added --in-memory to make it easy for external projects like Fedimint to run integration tests against our VSS server.
Sounds good, ok. And the configuration file would be too much of a hassle for them?
Yeah, exactly.
Or we could change the "default" database to the "in_memory" store, so people can clone the repository and run CI directly, right?
Also, another question: could Fedimint just run against postgres? We've done some work in the past to make setting that up much easier, as close to "just run it" as possible. Thank you for the context. You can see that our own build-and-deploy-rust.yml in this repo runs tests against a postgres service without too much trouble.
Yeah, they can run it against postgres, but the in-memory store simplifies their devimint CI: no DB service needed, just spawn the binary.
As for the --in-memory flag, it's low-effort and keeps the config file as the single source while enabling overrides. I don't think we should make in_memory the default!
Ok, sounds good, let's keep things as you've got them now, thank you.
🔔 2nd Reminder Hey @tnull! This PR has been waiting for your review.
Added an in-memory store for testing purposes.
The config file can be edited to select a specific store, either PostgreSQL or in-memory.