
Conversation

@ByteBaker (Contributor)

Add 112+ non-async unit tests across model modules to increase coverage:

  • content.rs: 34 tests for RawContent variants, type discrimination, meta preservation
  • prompt.rs: 24 tests for Prompt, PromptArgument, role handling, meta behavior
  • resource.rs: 21 tests for Resource, ResourceTemplate, URI schemes, equality
  • tool.rs: 33 tests for Tool, ToolAnnotations, schema handling, default behaviors
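
As a hypothetical illustration of the kind of behavior these tests can cover, a serde round-trip check might look like this (a sketch, not code from the PR; it assumes the model types derive Serialize, Deserialize, and PartialEq):

    #[test]
    fn test_prompt_serde_round_trip() {
        // Assumes Prompt: Serialize + Deserialize + PartialEq.
        let prompt = Prompt::new("example_prompt", Some("An example prompt"), None);
        // Serialize to JSON and back; all fields should survive the round trip.
        let json = serde_json::to_string(&prompt).expect("serialize");
        let round_tripped: Prompt = serde_json::from_str(&json).expect("deserialize");
        assert_eq!(prompt, round_tripped);
    }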

Motivation and Context

Increase unit test coverage of the core model modules (content, prompt, resource, tool).

How Has This Been Tested?

All new and existing unit tests pass locally.

Breaking Changes

None.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation update

Checklist

  • I have read the MCP Documentation
  • My code follows the repository's style guidelines
  • New and existing tests pass locally
  • I have added appropriate error handling
  • I have added or updated documentation as needed

github-actions bot added the T-core (Core library changes) and T-model (Model/data structure changes) labels on Oct 18, 2025
@alexhancock (Contributor)

I appreciate the contribution to improve testing. Thank you!

However, I think we should reconsider some of them. The value of a test is the number of bugs it catches over its lifetime, minus the number of times you have to change it when making routine code changes.

Asserting on the values of properties set on a newly constructed instance of a struct, like this:

    #[test]
    fn test_prompt_new() {
        let prompt = Prompt::new("test_prompt", Some("A test prompt"), None);
        assert_eq!(prompt.name, "test_prompt");
        assert_eq!(prompt.description, Some("A test prompt".to_string()));
        assert_eq!(prompt.arguments, None);
        assert_eq!(prompt.title, None);
        assert_eq!(prompt.icons, None);
    }

feels like it will require many test changes whenever we change the models, but won't necessarily catch bugs.

What do you think about a revision where we keep only the tests that rely on some processing internal to the model, rather than ones that just assert on the presence of certain keys?
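
For instance, a test that exercises the model's serde behavior, rather than echoing constructor arguments back, keeps paying off as the structs evolve. A sketch of what I mean (it assumes Prompt derives Serialize and omits unset optional fields, e.g. via skip_serializing_if, which may not match the actual definitions):

    #[test]
    fn test_prompt_omits_unset_optional_fields() {
        // Assumes optional fields are skipped during serialization when None.
        let prompt = Prompt::new("test_prompt", None, None);
        let json = serde_json::to_value(&prompt).expect("serialize");
        // An unset description should be omitted from the JSON, not emitted as null.
        assert!(json.get("description").is_none());
        assert_eq!(json.get("name").and_then(|v| v.as_str()), Some("test_prompt"));
    }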
