We have the following scenario: we run an EKS cluster hosting applications for two clients, clientA and clientB. We want logs from namespace A to go to Datadog organization A, and logs from namespace B to Datadog org B. I see that dual shipping can be configured, but that would mean sending the same data twice and then filtering at the pipeline level. Can we filter at the Datadog agent level instead, so we don't send the data multiple times?
Hey, the right answer here really depends on your log volume, in my opinion. The simple option is <https://docs.datadoghq.com/agent/logs/advanced_log_collection/?tab=configurationfile#filter-logs|agent-level filtering>, matching on simple patterns.
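For illustration, a minimal sketch of one way to do the per-org split at the agent level: run a separate agent install per client org and scope each one by namespace with the agent's container include/exclude log filters (a variant of agent-level filtering; the Helm chart keys and namespace names below are assumptions to verify against your chart version):
```
# values.yaml for a second Datadog agent install dedicated to clientB
datadog:
  apiKey: <ORG_B_API_KEY>        # org B's key, so nothing lands in org A
  logs:
    enabled: true
    containerCollectAll: true
  # Collect logs only from clientB's namespace; drop everything else.
  containerIncludeLogs: "kube_namespace:client-b"
  containerExcludeLogs: "kube_namespace:.*"
```
As far as I remember, include rules win over the broad exclude here, so only `client-b` logs are collected; a mirror-image install with org A's key covers namespace A. Worth verifying the precedence on your agent version, though.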
Now, if you were doing this at scale, processing hundreds of TBs of logs across different orgs/teams, I'd push you toward a more enterprise-grade solution like <https://docs.datadoghq.com/observability_pipelines/setup/?tab=docker|Vector/Observability Pipelines> to handle the log processing, with the DD agent acting only as a collector/log forwarder.
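As a rough sketch of the forwarder side, the agent can be pointed at a Worker/Vector endpoint instead of Datadog's intake. The service URL below is a placeholder, and the exact config section varies by agent version (older agents call it `vector` rather than `observability_pipelines_worker`):
```
# datadog.yaml on the agent (or the equivalent DD_* env vars)
observability_pipelines_worker:
  logs:
    enabled: true
    url: "http://opw.observability-pipelines.svc.cluster.local:8282"
```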
If I use Vector/Observability Pipelines, I'll use the Datadog agent as a source. But can I use multiple sinks of type `datadog_logs`? And how do I send the data from Kubernetes to the various sinks? I could not find any example in the Observability Pipelines Worker Helm chart. Can I do this with only one Datadog agent in the cluster: logs from namespace A → org A, logs from namespace B → org B, and possibly all remaining logs to yet another org? Is this possible?
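For what it's worth, here is roughly how I'd expect that to look on the Vector side: one `datadog_agent` source, a `route` transform splitting on the `kube_namespace` tag, and one `datadog_logs` sink per org, each with its own API key. The names, address, and tag-matching conditions are assumptions (this assumes `ddtags` arrives as the raw comma-separated string), so treat it as a sketch rather than a drop-in config:
```
sources:
  dd_agents:
    type: datadog_agent
    address: "0.0.0.0:8282"

transforms:
  by_namespace:
    type: route
    inputs: ["dd_agents"]
    route:
      org_a: 'contains(string!(.ddtags), "kube_namespace:namespace-a")'
      org_b: 'contains(string!(.ddtags), "kube_namespace:namespace-b")'

sinks:
  dd_org_a:
    type: datadog_logs
    inputs: ["by_namespace.org_a"]
    default_api_key: "${DD_API_KEY_ORG_A}"
  dd_org_b:
    type: datadog_logs
    inputs: ["by_namespace.org_b"]
    default_api_key: "${DD_API_KEY_ORG_B}"
  dd_catch_all:
    type: datadog_logs
    inputs: ["by_namespace._unmatched"]   # logs that matched neither route
    default_api_key: "${DD_API_KEY_ORG_C}"
```
The route transform's built-in `_unmatched` output covers the "rest of the logs" case, so a single agent feeding one Worker can fan out to as many orgs as you define sinks for, and nothing is shipped twice.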