GATEWAYS-4306: exporting metrics for conntrack per zone #137
Conversation
Please provide a descriptive commit message.
Sure! For now I have updated the description.
Pull Request Overview
This PR implements a new connection tracking monitoring system that leverages netlink to directly access kernel conntrack data. The system provides zone-based monitoring capabilities for granular network traffic analysis and DDoS detection.
- Adds a new ConntrackService that uses netlink to query kernel conntrack entries (see the sketch after the review table below)
- Integrates the conntrack service into the main OVS client with proper lifecycle management
- Updates dependencies to support the new conntrack functionality
Reviewed Changes
Copilot reviewed 3 out of 4 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| ovsnl/conntrack.go | New service implementing conntrack entry retrieval and conversion from kernel data |
| ovsnl/client.go | Integration of ConntrackService into main client with initialization and cleanup |
| go.mod | Dependency updates including ti-mo/conntrack library and Go version upgrade |
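As context for the overview above, here is a minimal sketch of the kind of zone-keyed dump the service performs. It assumes the ti-mo/conntrack Dial/Dump(nil) API seen later in this PR and that Flow exposes a Zone field; it is not the PR's actual code.

```go
package main

import (
	"fmt"
	"log"

	"github.com/ti-mo/conntrack"
)

func main() {
	// Open a netlink connection to the kernel's conntrack subsystem.
	conn, err := conntrack.Dial(nil)
	if err != nil {
		log.Fatalf("dialing conntrack: %v", err)
	}
	defer conn.Close()

	// Dump all conntrack entries (the PR calls Dump(nil) the same way).
	flows, err := conn.Dump(nil)
	if err != nil {
		log.Fatalf("dumping conntrack entries: %v", err)
	}

	// Aggregate entry counts per conntrack zone.
	perZone := make(map[uint16]uint64)
	for _, f := range flows {
		perZone[f.Zone]++
	}
	for zone, n := range perZone {
		fmt.Printf("zone %d: %d entries\n", zone, n)
	}
}
```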
ovsnl/conntrack.go (outdated)

```go
// Start dump in goroutine
go func() {
	defer close(flowChan)
	flows, err := s.client.Dump(nil)
```
I am curious how well Dump scales, and whether you should be using DumpFilter or DumpExpect instead (or maybe even some new variant that just counts, if needed).
How many entries did you scale to in your test setup?
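For reference, a hedged sketch of what a mark-filtered dump might look like. The conntrack.Filter{Mark, Mask} shape and the DumpFilter(f, nil) signature are assumptions about the ti-mo/conntrack version pinned in go.mod, and countMarkedFlows is an illustrative helper, not this PR's code.

```go
package main

import (
	"fmt"

	"github.com/ti-mo/conntrack"
)

// countMarkedFlows narrows the kernel-side dump to entries carrying the given
// mark and then counts them per zone. The Filter{Mark, Mask} fields and the
// DumpFilter(f, nil) signature are assumptions about the library version in use.
func countMarkedFlows(conn *conntrack.Conn, mark uint32) (map[uint16]uint64, error) {
	flows, err := conn.DumpFilter(conntrack.Filter{Mark: mark, Mask: 0xffffffff}, nil)
	if err != nil {
		return nil, fmt.Errorf("filtered conntrack dump: %w", err)
	}
	perZone := make(map[uint16]uint64)
	for _, f := range flows {
		perZone[f.Zone]++
	}
	return perZone, nil
}
```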
I did a POC but have not tested with heavy traffic yet. Will check on DumpFilter and DumpExpect as well. Let me get back to you.
You won't need heavy traffic. Just scale up to like a million or two Conntrack entries and see if it performs well.
Updated the code and checked with 1 million conntrack entries; it is working. Just a heads up: the code still needs a lot of cleanup.
That's encouraging. While you are at it, could you try the max conntrack limit as well?
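As a side note, the kernel exposes both the current entry count and the configured maximum under /proc, so a scale test can compare the dump against the limit. A small sketch (readProcValue is an illustrative helper):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

// readProcValue reads a single integer from a /proc/sys file.
func readProcValue(path string) (int64, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.ParseInt(strings.TrimSpace(string(b)), 10, 64)
}

func main() {
	// Current number of tracked connections and the configured ceiling.
	count, err := readProcValue("/proc/sys/net/netfilter/nf_conntrack_count")
	if err != nil {
		log.Fatalf("reading nf_conntrack_count: %v", err)
	}
	max, err := readProcValue("/proc/sys/net/netfilter/nf_conntrack_max")
	if err != nil {
		log.Fatalf("reading nf_conntrack_max: %v", err)
	}
	fmt.Printf("conntrack entries: %d / %d (%.1f%% of the limit)\n",
		count, max, 100*float64(count)/float64(max))
}
```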
Looking at the snapshot from your scaled run, it does show an IRQ plateau for the duration of the run. I am assuming there were no CPU lockup messages from the kernel during this run, correct?
Did you get a chance to optimize the frequency of metrics collection?
On another note, this collection should be controlled with a config knob and we should slow roll this carefully.
Also cc @jcooperdo for another pair of eyes.
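To make the slow roll concrete, a sketch of a config-gated collection loop; ConntrackMetricsConfig, its fields, and runCollection are hypothetical names, not part of this PR:

```go
package main

import (
	"context"
	"log"
	"time"
)

// ConntrackMetricsConfig is a hypothetical knob for gating the collection
// and controlling how often it runs; names are illustrative only.
type ConntrackMetricsConfig struct {
	Enabled  bool          // default false, so the rollout starts dark
	Interval time.Duration // how often to collect, e.g. 60 * time.Second
}

// runCollection runs collect on a ticker until the context is canceled.
func runCollection(ctx context.Context, cfg ConntrackMetricsConfig, collect func() error) {
	if !cfg.Enabled {
		return // knob off: never touch conntrack
	}
	t := time.NewTicker(cfg.Interval)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			if err := collect(); err != nil {
				log.Printf("conntrack collection failed: %v", err)
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	cfg := ConntrackMetricsConfig{Enabled: true, Interval: time.Second}
	runCollection(ctx, cfg, func() error {
		log.Printf("collecting per-zone conntrack metrics")
		return nil
	})
}
```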
@do-msingh was working on the issue you caught in the screenshot; the conntrack count refresh was not working properly. It looks under control now. So far I have tested by seeding conntracks to a specific droplet on a hypervisor. In this screenshot you will see only around a 400K jump because, for easier testing, I kept the timeout at 10 minutes, but I actually created 2.6 million conntracks. I will test scenarios like a 1 hour timeout and seeding conntracks to multiple droplets (10-20), and see how the system performs. Will keep this thread posted.
While testing with a 1 hour timeout and 2.6M conntracks created against one droplet in a single zone, there are some small discrepancies due to processing delay (for example, we missed counting about 12K events while running the sync for 2.6M conntracks).
I can try to fix this later as an improvement task.
For comparison, when I tested creating conntracks without my changes to openvswitch_exporter, the graph looks like the one above.
The build is failing due to the Go version on my local machine; this repository uses an older version. Once the code is signed off I will install the older version and push. Keeping it like this for now.
ovsnl/conntrack.go (outdated)

```go
// NewZoneMarkAggregator creates a new aggregator with its own listening connection.
func NewZoneMarkAggregator(s *ConntrackService) (*ZoneMarkAggregator, error) {
	log.Printf("Creating new conntrack zone mark aggregator...")
```
Could you remove these logs?
ovsnl/conntrack.go (outdated)

```go
	return nil, fmt.Errorf("failed to create listening connection: %w", err)
}

log.Printf("Successfully created conntrack listening connection")
```
same as above
ovsnl/conntrack.go (outdated)

```go
// Start dump in goroutine
go func() {
	defer close(flowChan)
	flows, err := s.client.Dump(nil)
```
This script would be nice to integrate with chef, and maybe export some metrics using node exporter so we can build some dashboards around it. In your tests, could you run at scale for an extended period, like a couple of hours, and check average CPU utilization? Do you only see CPU spikes around the time metrics are collected? For how long? Also, @jcooper had a suggestion to reduce the frequency of collecting the metrics, or maybe optimize it to reduce load.
Lastly, can you check the dmesg output at scale as well, to make sure we are not missing anything?
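On the node exporter / dashboard point, a sketch of how per-zone counts could be exposed as Prometheus gauges; the metric name, label, and port are illustrative, not what this PR or the exporter actually uses:

```go
package main

import (
	"log"
	"net/http"
	"strconv"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Illustrative gauge; the metric and label names are not from this PR.
var conntrackEntriesPerZone = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "ovs_conntrack_entries_per_zone",
		Help: "Number of conntrack entries observed per conntrack zone.",
	},
	[]string{"zone"},
)

func main() {
	prometheus.MustRegister(conntrackEntriesPerZone)

	// In the real exporter these values would come from the aggregator;
	// a couple of dummy zones keep the sketch self-contained.
	for zone, count := range map[uint16]float64{0: 1200, 7: 950} {
		conntrackEntriesPerZone.WithLabelValues(strconv.Itoa(int(zone))).Set(count)
	}

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9108", nil)) // port is arbitrary here
}
```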
ovsnl/conntrack_linux.go (outdated)

```go
if atomic.LoadInt64(&a.eventCount)%100 == 0 {
	runtime.Gosched()
}
```
what is this for?
I was testing with 2.6M conntracks sent to a single VM, and also ran a test where multiple VMs receive heavy conntrack traffic. My intention is to tell the scheduler to give up the current goroutine's time slice and let others run. I found this can be used to improve concurrency and responsiveness in high-throughput or tight-loop scenarios. Doing it for every event would be inefficient, so I added the if condition.
ovsnl/conntrack_linux.go (outdated)

```go
// eventWorker consumes events from eventsCh and handles them
func (a *ZoneMarkAggregator) eventWorker(workerID int) {
	processedCount := 0
```
Not sure I see the value in this if all we're gonna do is just log it out.
ovsnl/conntrack_linux.go (outdated)

```go
}

for i := 0; i < eventWorkerCount; i++ {
	workerID := i // Capture the loop variable
```
Not necessary in recent versions of Go.
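For illustration, since go.mod bumps the Go version: from Go 1.22 onward each loop iteration has its own variable, so the explicit workerID := i copy is redundant.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const eventWorkerCount = 3
	var wg sync.WaitGroup
	// Since Go 1.22 each iteration gets a fresh i, so goroutines started in
	// the loop capture the per-iteration value; the `workerID := i` copy is
	// only needed when building with older Go toolchains.
	for i := 0; i < eventWorkerCount; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println("worker", i)
		}()
	}
	wg.Wait()
}
```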
ovsnl/conntrack_linux.go (outdated)

```go
if a.eventCount.Load()%100 == 0 {
	runtime.Gosched()
```
This still strongly feels like a smell.
If we're starting 100 eventWorker goroutines and they still can't keep up, we should look at changing the overall design of this concurrent work, not mess with goroutine scheduling via the runtime.
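For illustration, one shape such a redesign could take: a fixed worker pool reading from a bounded channel, where backpressure comes from the channel rather than runtime.Gosched. The Event type and names are illustrative, not this PR's code.

```go
package main

import (
	"log"
	"sync"
)

// Event stands in for whatever the aggregator receives from the listener.
type Event struct{ Zone uint16 }

// runWorkers drains a bounded events channel with a fixed pool of workers.
// When workers fall behind, sends on the channel block (or the producer can
// drop and count), so no explicit runtime.Gosched calls are needed.
func runWorkers(events <-chan Event, workers int, handle func(Event)) {
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for ev := range events { // exits when events is closed
				handle(ev)
			}
		}()
	}
	wg.Wait()
}

func main() {
	events := make(chan Event, 1024) // bounded buffer provides backpressure
	go func() {
		defer close(events)
		for i := 0; i < 10; i++ {
			events <- Event{Zone: uint16(i % 3)}
		}
	}()
	counts := make(map[uint16]int)
	var mu sync.Mutex
	runWorkers(events, 4, func(ev Event) {
		mu.Lock()
		counts[ev.Zone]++
		mu.Unlock()
	})
	log.Printf("per-zone event counts: %v", counts)
}
```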
Let me test it without this and post a report here.
As per testing, softirq usage was around 1.3.
```go
	return fmt.Errorf("failed to create new listening connection: %w", err)
}
a.listenCli = listenCli
return a.startEventListener()
```
If we're gonna start a new event listening loop, we should ensure the previous listening goroutine is canceled and exits cleanly.
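A sketch of the cancel-then-restart pattern this comment asks for, using a per-listener context so the old goroutine exits before a new one starts; the field and method names are illustrative, not this PR's code.

```go
package main

import (
	"context"
	"log"
	"sync"
	"time"
)

// aggregator is an illustrative skeleton; fields are not from this PR.
type aggregator struct {
	mu         sync.Mutex
	cancel     context.CancelFunc
	listenerWG sync.WaitGroup
}

// restartListener cancels the previous listening goroutine, waits for it to
// exit cleanly, and only then starts a new one, so two listeners never run
// at the same time.
func (a *aggregator) restartListener() {
	a.mu.Lock()
	defer a.mu.Unlock()

	if a.cancel != nil {
		a.cancel()          // signal the old listener to stop
		a.listenerWG.Wait() // wait for it to exit
	}

	ctx, cancel := context.WithCancel(context.Background())
	a.cancel = cancel
	a.listenerWG.Add(1)
	go func() {
		defer a.listenerWG.Done()
		a.listen(ctx)
	}()
}

// listen is a stand-in for the conntrack event loop.
func (a *aggregator) listen(ctx context.Context) {
	for {
		select {
		case <-ctx.Done():
			log.Printf("listener exiting")
			return
		case <-time.After(100 * time.Millisecond):
			// placeholder for reading and handling conntrack events
		}
	}
}

func main() {
	a := &aggregator{}
	a.restartListener()
	a.restartListener() // the first listener is stopped before the second starts
	time.Sleep(200 * time.Millisecond)
}
```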
ovsnl/client.go (outdated)

```go
c *genetlink.Conn
Agg *ZoneMarkAggregator // lazily initialized
```
I don't see NewZoneMarkAggregator being called to actually set this?
Yeah, this can be deleted. I have used it in digitalocean/openvswitch_exporter#21, but differently. Removing it.
```go
	}
}

a.wg.Wait()
```
How would this work in practice? Nothing is telling all the goroutines to exit via stopCh.
Added stop handling for all channels.
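For reference, a sketch of the pattern being described: closing one stop channel is observed by every worker, so wg.Wait() actually returns. Names are illustrative, not the PR's exact fields.

```go
package main

import (
	"log"
	"sync"
)

// pool is an illustrative worker pool; field names are not from this PR.
type pool struct {
	stopCh chan struct{}
	events chan int
	wg     sync.WaitGroup
}

func (p *pool) start(workers int) {
	for i := 0; i < workers; i++ {
		p.wg.Add(1)
		go func() {
			defer p.wg.Done()
			for {
				select {
				case <-p.stopCh: // a closed channel unblocks every worker
					return
				case ev := <-p.events:
					log.Printf("handled event %d", ev)
				}
			}
		}()
	}
}

func (p *pool) stop() {
	close(p.stopCh) // broadcast shutdown to all goroutines
	p.wg.Wait()     // now this returns instead of blocking forever
	log.Printf("all workers stopped")
}

func main() {
	p := &pool{stopCh: make(chan struct{}), events: make(chan int, 8)}
	p.start(4)
	p.events <- 42
	p.stop()
}
```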
We have moved the logic to digitalocean/openvswitch_exporter#21.




