Attaching attributes with the Kubernetes attributes processor to data received by the OpenTelemetry Collector's OTLP receiver

The OpenTelemetry Collector's Kubernetes attributes processor is a processor that keeps track of the Pods running on a Kubernetes cluster and, when telemetry data is sent from one of those Pods, attaches metadata about the Pod as attributes.

As the README says, "By default, it associates the incoming connection IP to the Pod IP.", so you would expect attributes to also be attached to logs and other data that Pods send to the OTLP receiver. However, when I ran the OpenTelemetry Collector as a DaemonSet, attributes were only attached to logs written to stdout, so I investigated the cause and made the processor attach attributes to data received by the OTLP receiver as well.
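
For reference, this association by connection IP can also be written out explicitly via the processor's pod_association setting. A minimal sketch, following the schema described in the processor's README:

processors:
  k8sattributes:
    pod_association:
      - sources:
          # associate incoming telemetry with a Pod via the connection's source IP
          - from: connection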

For simplicity, we will install the OpenTelemetry Collector with its Helm chart in daemonset mode.
The code and configuration files used for this verification are available at github.com/abicky/opentelemetry-collector-k8s-example.

An example where attributes are not attached

As the minimal configuration that seems necessary to check the behavior of the Kubernetes attributes processor, define a values.yaml like the following.

values.yaml
mode: daemonset

image:
  repository: otel/opentelemetry-collector-k8s

presets:
  kubernetesAttributes:
    enabled: true
  logsCollection:
    enabled: true

config:
  exporters:
    debug:
      verbosity: detailed
  processors:
    k8sattributes:
      extract:
        annotations:
        - from: pod
          key_regex: ^resource\.opentelemetry\.io/(.+)$
          tag_name: $$1
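
With this extract rule, a Pod annotation such as the one below should be turned into the resource attribute service.name=hello: $1 refers to the regex capture group, and the extra $ in $$1 escapes it so that the Collector's configuration expansion passes a literal $1 through to the processor. A hypothetical input/output sketch:

# Pod annotation (input)
metadata:
  annotations:
    resource.opentelemetry.io/service.name: hello
# Extracted resource attribute (output): service.name=hello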

Install the OpenTelemetry Collector using the values.yaml above.

helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm upgrade --install opentelemetry-collector open-telemetry/opentelemetry-collector \
  --version 0.134.0 \
  -f values.yaml \
  --namespace opentelemetry-collector \
  --create-namespace

If you follow the Collector configuration best practices, Pods that use the OTLP receiver specify OTEL_EXPORTER_OTLP_ENDPOINT=http://$(MY_HOST_IP):4317 as shown below, but with this setup the Kubernetes attributes processor does not attach any attributes.

pod.yaml
apiVersion: v1
kind: Pod
metadata:
  generateName: hello-otel-
  annotations:
    resource.opentelemetry.io/service.name: hello
    resource.opentelemetry.io/service.version: 0.0.1
spec:
  containers:
  - name: hello-otel
    image: ghcr.io/abicky/opentelemetry-collector-k8s-example/hello-otel:latest
    env:
    - name: MY_HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    - name: OTEL_RESOURCE_ATTRIBUTES
      value: service.name=hello,service.version=0.0.1
    - name: OTEL_EXPORTER_OTLP_ENDPOINT
      value: http://$(MY_HOST_IP):4317
  restartPolicy: Never

Let's create the Pod.

kubectl create -f pod.yaml

The OpenTelemetry Collector's logs look like this.

Logs related to the OTLP receiver

2025-09-21T06:58:09.900Z        info    ResourceLog #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.37.0
Resource attributes:
     -> service.name: Str(hello)
     -> service.version: Str(0.0.1)
     -> telemetry.sdk.language: Str(go)
     -> telemetry.sdk.name: Str(opentelemetry)
     -> telemetry.sdk.version: Str(1.38.0)
     -> k8s.pod.ip: Str(10.244.0.1)
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope github.com/abicky/opentelemetry-collector-k8s-example
LogRecord #0
ObservedTimestamp: 2025-09-21 06:58:09.674347885 +0000 UTC
Timestamp: 2025-09-21 06:58:09.674336635 +0000 UTC
SeverityText: INFO
SeverityNumber: Info(9)
Body: Str(Hello World!)
Attributes:
     -> key1: Str(value1)
Trace ID: 259de42ec10699e1c67bc3ebd635840d
Span ID: b61cc14f6a1d0798
Flags: 1

Logs related to the Filelog receiver

2025-09-21T06:58:10.101Z        info    ResourceLog #0
Resource SchemaURL:
Resource attributes:
     -> k8s.pod.uid: Str(66af25e1-da87-487e-9c31-79c2ddfd6e9f)
     -> k8s.container.name: Str(hello-otel)
     -> k8s.namespace.name: Str(default)
     -> k8s.pod.name: Str(hello-otel-zpnsn)
     -> k8s.container.restart_count: Str(0)
     -> k8s.pod.start_time: Str(2025-09-21T06:58:07Z)
     -> k8s.node.name: Str(minikube)
     -> service.name: Str(hello)
     -> service.version: Str(0.0.1)
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope
LogRecord #0
ObservedTimestamp: 2025-09-21 06:58:09.894962843 +0000 UTC
Timestamp: 2025-09-21 06:58:09.674437885 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str([INFO] Hello World!]
)
Attributes:
     -> log.file.path: Str(/var/log/pods/default_hello-otel-zpnsn_66af25e1-da87-487e-9c31-79c2ddfd6e9f/hello-otel/0.log)
     -> log.iostream: Str(stdout)
Trace ID:
Span ID:
Flags: 0

You can see that the logs related to the OTLP receiver lack attributes such as k8s.pod.name. Also, the value of k8s.pod.ip is 10.244.0.1, which does not look like a Pod IP; as we will see below, it is the source address after masquerading rather than the client Pod's IP.

An example where attributes are attached

Add service.enabled: true to values.yaml as shown below. This creates a Service with internalTrafficPolicy: Local.

values.yaml
mode: daemonset

image:
  repository: otel/opentelemetry-collector-k8s

presets:
  kubernetesAttributes:
    enabled: true
  logsCollection:
    enabled: true

service:
  enabled: true

config:
  processors:
    k8sattributes:
      extract:
        annotations:
        - from: pod
          key_regex: ^resource\.opentelemetry\.io/(.+)$
          tag_name: $$1

Install the OpenTelemetry Collector using the values.yaml above.

helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm upgrade --install opentelemetry-collector open-telemetry/opentelemetry-collector \
  --version 0.134.0 \
  -f values.yaml \
  --namespace opentelemetry-collector \
  --create-namespace

Unlike in "An example where attributes are not attached", Pods that use the OTLP receiver specify OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector.opentelemetry-collector.svc.cluster.local:4317 so that they send to the created Service's endpoint.

pod.yaml
apiVersion: v1
kind: Pod
metadata:
  generateName: hello-otel-
  annotations:
    resource.opentelemetry.io/service.name: hello
    resource.opentelemetry.io/service.version: 0.0.1
spec:
  containers:
  - name: hello-otel
    image: ghcr.io/abicky/opentelemetry-collector-k8s-example/hello-otel:latest
    env:
    - name: OTEL_RESOURCE_ATTRIBUTES
      value: service.name=hello,service.version=0.0.1
    - name: OTEL_EXPORTER_OTLP_ENDPOINT
      value: http://opentelemetry-collector.opentelemetry-collector.svc.cluster.local:4317
  restartPolicy: Never

Let's create the Pod.

kubectl create -f pod.yaml

The OpenTelemetry Collector's logs look like this.

Logs related to the OTLP receiver

2025-09-21T07:05:00.917Z        info    ResourceLog #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.37.0
Resource attributes:
     -> service.name: Str(hello)
     -> service.version: Str(0.0.1)
     -> telemetry.sdk.language: Str(go)
     -> telemetry.sdk.name: Str(opentelemetry)
     -> telemetry.sdk.version: Str(1.38.0)
     -> k8s.pod.ip: Str(10.244.0.12)
     -> k8s.pod.name: Str(hello-otel-mbzd9)
     -> k8s.namespace.name: Str(default)
     -> k8s.pod.start_time: Str(2025-09-21T07:04:58Z)
     -> k8s.pod.uid: Str(787de00a-454a-448f-9ce2-5b800018e32e)
     -> k8s.node.name: Str(minikube)
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope github.com/abicky/opentelemetry-collector-k8s-example
LogRecord #0
ObservedTimestamp: 2025-09-21 07:05:00.779964622 +0000 UTC
Timestamp: 2025-09-21 07:05:00.779955706 +0000 UTC
SeverityText: INFO
SeverityNumber: Info(9)
Body: Str(Hello World!)
Attributes:
     -> key1: Str(value1)
Trace ID: f1667fc768434960f328a2cce545398d
Span ID: 9dc540ac9bdc4354
Flags: 1

Logs related to the Filelog receiver

2025-09-21T07:05:01.117Z        info    Logs    {"resource": {"service.instance.id": "dd6dc1f8-7952-4bba-97cc-b3531c553d63", "service.name": "otelcol-k8s", "service.version": "0.135.0"}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "logs", "resource logs": 2, "log records": 37}
2025-09-21T07:05:01.118Z        info    ResourceLog #0
Resource SchemaURL:
Resource attributes:
     -> k8s.pod.name: Str(hello-otel-mbzd9)
     -> k8s.container.restart_count: Str(0)
     -> k8s.pod.uid: Str(787de00a-454a-448f-9ce2-5b800018e32e)
     -> k8s.container.name: Str(hello-otel)
     -> k8s.namespace.name: Str(default)
     -> service.version: Str(0.0.1)
     -> service.name: Str(hello)
     -> k8s.pod.start_time: Str(2025-09-21T07:04:58Z)
     -> k8s.node.name: Str(minikube)
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope
LogRecord #0
ObservedTimestamp: 2025-09-21 07:05:00.859715664 +0000 UTC
Timestamp: 2025-09-21 07:05:00.780197414 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str([INFO] Hello World!]
)
Attributes:
     -> log.file.path: Str(/var/log/pods/default_hello-otel-mbzd9_787de00a-454a-448f-9ce2-5b800018e32e/hello-otel/0.log)
     -> log.iostream: Str(stdout)
Trace ID:
Span ID:
Flags: 0

This time, attributes such as k8s.pod.name are attached to the logs related to the OTLP receiver as well, and the value of k8s.pod.ip is 10.244.0.12, which does look like a Pod IP.
The same attributes are attached to traces and metrics too.

Why aren't attributes attached when the host IP is used?

First, installing the OpenTelemetry Collector in the Helm chart's daemonset mode defines a DaemonSet like the following.

apiVersion: apps/v1
kind: DaemonSet
spec:
  template:
    spec:
      containers:
        - name: opentelemetry-collector
          args:
            - --config=/conf/relay.yaml
          securityContext:
            {}
          image: "otel/opentelemetry-collector-k8s:0.135.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: otlp
              containerPort: 4317
              protocol: TCP
              hostPort: 4317
-- snip --

Because the host's port 4317 is mapped to the container's port 4317, the OTLP receiver can be reached at http://$(MY_HOST_IP):4317.
This port mapping is implemented with the iptables NAT table. On minikube, you can inspect the rules with the following command.

minikube ssh 'sudo iptables -L -t nat -n'

If the OpenTelemetry Collector's Pod IP is 10.244.0.3, the output looks like the following.

Output
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
DOCKER_OUTPUT  all  --  0.0.0.0/0            192.168.5.2         
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL
CNI-HOSTPORT-DNAT  all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
DOCKER_OUTPUT  all  --  0.0.0.0/0            192.168.5.2         
DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL
CNI-HOSTPORT-DNAT  all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
CNI-HOSTPORT-MASQ  all  --  0.0.0.0/0            0.0.0.0/0            /* CNI portfwd requiring masquerade */
KUBE-POSTROUTING  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes postrouting rules */
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0           
DOCKER_POSTROUTING  all  --  0.0.0.0/0            192.168.5.2         
CNI-63ca6a5e53e580099e762a37  all  --  10.244.0.2           0.0.0.0/0            /* name: "bridge" id: "91c4bb4e8c3975c045ab9b40179a31b673c18adeed0261e72a93480a852deb63" */
CNI-49b91e9558fcd61a2007a64a  all  --  10.244.0.3           0.0.0.0/0            /* name: "bridge" id: "f74edbf5656551c51b28f524a13fbca17bfd0577d0faf1f561932a090f99255f" */

Chain CNI-49b91e9558fcd61a2007a64a (1 references)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            10.244.0.0/16        /* name: "bridge" id: "f74edbf5656551c51b28f524a13fbca17bfd0577d0faf1f561932a090f99255f" */
MASQUERADE  all  --  0.0.0.0/0           !224.0.0.0/4          /* name: "bridge" id: "f74edbf5656551c51b28f524a13fbca17bfd0577d0faf1f561932a090f99255f" */

Chain CNI-63ca6a5e53e580099e762a37 (1 references)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            10.244.0.0/16        /* name: "bridge" id: "91c4bb4e8c3975c045ab9b40179a31b673c18adeed0261e72a93480a852deb63" */
MASQUERADE  all  --  0.0.0.0/0           !224.0.0.0/4          /* name: "bridge" id: "91c4bb4e8c3975c045ab9b40179a31b673c18adeed0261e72a93480a852deb63" */

Chain CNI-DN-49b91e9558fcd61a2007a (2 references)
target     prot opt source               destination         
CNI-HOSTPORT-SETMARK  udp  --  10.244.0.0/16        0.0.0.0/0            udp dpt:6831
CNI-HOSTPORT-SETMARK  udp  --  127.0.0.1            0.0.0.0/0            udp dpt:6831
DNAT       udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:6831 to:10.244.0.3:6831
CNI-HOSTPORT-SETMARK  tcp  --  10.244.0.0/16        0.0.0.0/0            tcp dpt:14250
CNI-HOSTPORT-SETMARK  tcp  --  127.0.0.1            0.0.0.0/0            tcp dpt:14250
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:14250 to:10.244.0.3:14250
CNI-HOSTPORT-SETMARK  tcp  --  10.244.0.0/16        0.0.0.0/0            tcp dpt:14268
CNI-HOSTPORT-SETMARK  tcp  --  127.0.0.1            0.0.0.0/0            tcp dpt:14268
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:14268 to:10.244.0.3:14268
CNI-HOSTPORT-SETMARK  tcp  --  10.244.0.0/16        0.0.0.0/0            tcp dpt:4317
CNI-HOSTPORT-SETMARK  tcp  --  127.0.0.1            0.0.0.0/0            tcp dpt:4317
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:4317 to:10.244.0.3:4317
CNI-HOSTPORT-SETMARK  tcp  --  10.244.0.0/16        0.0.0.0/0            tcp dpt:4318
CNI-HOSTPORT-SETMARK  tcp  --  127.0.0.1            0.0.0.0/0            tcp dpt:4318
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:4318 to:10.244.0.3:4318
CNI-HOSTPORT-SETMARK  tcp  --  10.244.0.0/16        0.0.0.0/0            tcp dpt:9411
CNI-HOSTPORT-SETMARK  tcp  --  127.0.0.1            0.0.0.0/0            tcp dpt:9411
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:9411 to:10.244.0.3:9411

Chain CNI-HOSTPORT-DNAT (2 references)
target     prot opt source               destination         
CNI-DN-49b91e9558fcd61a2007a  tcp  --  0.0.0.0/0            0.0.0.0/0            /* dnat name: "bridge" id: "f74edbf5656551c51b28f524a13fbca17bfd0577d0faf1f561932a090f99255f" */ multiport dports 14250,14268,4317,4318,9411
CNI-DN-49b91e9558fcd61a2007a  udp  --  0.0.0.0/0            0.0.0.0/0            /* dnat name: "bridge" id: "f74edbf5656551c51b28f524a13fbca17bfd0577d0faf1f561932a090f99255f" */ multiport dports 6831

Chain CNI-HOSTPORT-MASQ (1 references)
target     prot opt source               destination         
MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0            mark match 0x2000/0x2000

Chain CNI-HOSTPORT-SETMARK (12 references)
target     prot opt source               destination         
MARK       all  --  0.0.0.0/0            0.0.0.0/0            /* CNI portfwd masquerade mark */ MARK or 0x2000

Chain DOCKER (2 references)
target     prot opt source               destination         
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           

Chain DOCKER_OUTPUT (2 references)
target     prot opt source               destination         
DNAT       tcp  --  0.0.0.0/0            192.168.5.2          tcp dpt:53 to:127.0.0.11:39019
DNAT       udp  --  0.0.0.0/0            192.168.5.2          udp dpt:53 to:127.0.0.11:34808

Chain DOCKER_POSTROUTING (1 references)
target     prot opt source               destination         
SNAT       tcp  --  127.0.0.11           0.0.0.0/0            to:192.168.5.2:53
SNAT       udp  --  127.0.0.11           0.0.0.0/0            to:192.168.5.2:53

Chain KUBE-KUBELET-CANARY (0 references)
target     prot opt source               destination         

Chain KUBE-MARK-MASQ (8 references)
target     prot opt source               destination         
MARK       all  --  0.0.0.0/0            0.0.0.0/0            MARK or 0x4000

Chain KUBE-NODEPORTS (1 references)
target     prot opt source               destination         

Chain KUBE-POSTROUTING (1 references)
target     prot opt source               destination         
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           
MARK       all  --  0.0.0.0/0            0.0.0.0/0            MARK xor 0x4000
MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service traffic requiring SNAT */ random-fully

Chain KUBE-PROXY-CANARY (0 references)
target     prot opt source               destination         

Chain KUBE-SEP-IT2ZTR26TO4XFPTO (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.0.2           0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */ tcp to:10.244.0.2:53

Chain KUBE-SEP-N4G2XR5TDX7PQE7P (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.0.2           0.0.0.0/0            /* kube-system/kube-dns:metrics */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:metrics */ tcp to:10.244.0.2:9153

Chain KUBE-SEP-VPILYQBSPPXYB66K (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  192.168.49.2         0.0.0.0/0            /* default/kubernetes:https */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ tcp to:192.168.49.2:8443

Chain KUBE-SEP-YIL6JZP7A3QYXJU2 (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.0.2           0.0.0.0/0            /* kube-system/kube-dns:dns */
DNAT       udp  --  0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns */ udp to:10.244.0.2:53

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination         
KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  0.0.0.0/0            10.96.0.1            /* default/kubernetes:https cluster IP */
KUBE-SVC-TCOU7JCQXEZGVUNU  udp  --  0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:dns cluster IP */
KUBE-SVC-ERIFXISQEP7F7OF4  tcp  --  0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:dns-tcp cluster IP */
KUBE-SVC-JD5MR3NA4I4DYORP  tcp  --  0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:metrics cluster IP */
KUBE-NODEPORTS  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.96.0.10           /* kube-system/kube-dns:dns-tcp cluster IP */
KUBE-SEP-IT2ZTR26TO4XFPTO  all  --  0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp -> 10.244.0.2:53 */

Chain KUBE-SVC-JD5MR3NA4I4DYORP (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.96.0.10           /* kube-system/kube-dns:metrics cluster IP */
KUBE-SEP-N4G2XR5TDX7PQE7P  all  --  0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:metrics -> 10.244.0.2:9153 */

Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.96.0.1            /* default/kubernetes:https cluster IP */
KUBE-SEP-VPILYQBSPPXYB66K  all  --  0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https -> 192.168.49.2:8443 */

Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  udp  -- !10.244.0.0/16        10.96.0.10           /* kube-system/kube-dns:dns cluster IP */
KUBE-SEP-YIL6JZP7A3QYXJU2  all  --  0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns -> 10.244.0.2:53 */

Picking out only the rules that seem relevant leaves the following.

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
CNI-HOSTPORT-DNAT  all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
CNI-HOSTPORT-DNAT  all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
CNI-HOSTPORT-MASQ  all  --  0.0.0.0/0            0.0.0.0/0            /* CNI portfwd requiring masquerade */
CNI-49b91e9558fcd61a2007a64a  all  --  10.244.0.3           0.0.0.0/0            /* name: "bridge" id: "f74edbf5656551c51b28f524a13fbca17bfd0577d0faf1f561932a090f99255f" */

Chain CNI-49b91e9558fcd61a2007a64a (1 references)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            10.244.0.0/16        /* name: "bridge" id: "f74edbf5656551c51b28f524a13fbca17bfd0577d0faf1f561932a090f99255f" */
MASQUERADE  all  --  0.0.0.0/0           !224.0.0.0/4          /* name: "bridge" id: "f74edbf5656551c51b28f524a13fbca17bfd0577d0faf1f561932a090f99255f" */

Chain CNI-DN-49b91e9558fcd61a2007a (2 references)
target     prot opt source               destination         
CNI-HOSTPORT-SETMARK  tcp  --  10.244.0.0/16        0.0.0.0/0            tcp dpt:4317
CNI-HOSTPORT-SETMARK  tcp  --  127.0.0.1            0.0.0.0/0            tcp dpt:4317
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:4317 to:10.244.0.3:4317

Chain CNI-HOSTPORT-DNAT (2 references)
target     prot opt source               destination         
CNI-DN-49b91e9558fcd61a2007a  tcp  --  0.0.0.0/0            0.0.0.0/0            /* dnat name: "bridge" id: "f74edbf5656551c51b28f524a13fbca17bfd0577d0faf1f561932a090f99255f" */ multiport dports 14250,14268,4317,4318,9411
CNI-DN-49b91e9558fcd61a2007a  udp  --  0.0.0.0/0            0.0.0.0/0            /* dnat name: "bridge" id: "f74edbf5656551c51b28f524a13fbca17bfd0577d0faf1f561932a090f99255f" */ multiport dports 6831

Chain CNI-HOSTPORT-MASQ (1 references)
target     prot opt source               destination         
MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0            mark match 0x2000/0x2000

Chain CNI-HOSTPORT-SETMARK (12 references)
target     prot opt source               destination         
MARK       all  --  0.0.0.0/0            0.0.0.0/0            /* CNI portfwd masquerade mark */ MARK or 0x2000

Requests to the host's port 4317 are translated by the DNAT rule in CNI-DN-49b91e9558fcd61a2007a into requests to 10.244.0.3:4317, and requests to 10.244.0.3:4317 then have their source (Pod) address rewritten by the MASQUERADE rule in CNI-49b91e9558fcd61a2007a64a.

So while the Kubernetes attributes processor would normally attach the attributes of the Pod corresponding to the connection IP, here the connection IP differs from the requesting Pod's IP, and the processor cannot find the corresponding Pod.
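
Incidentally, if you need to keep sending to the host IP, the processor's README also describes associating telemetry with Pods via a resource attribute instead of the connection IP. A sketch of that approach, with the Pod IP injected through the Downward API (the MY_POD_IP variable name here is just for illustration):

# values.yaml: associate by the k8s.pod.ip resource attribute
config:
  processors:
    k8sattributes:
      pod_association:
        - sources:
            - from: resource_attribute
              name: k8s.pod.ip

# pod.yaml: expose the Pod IP and send it as a resource attribute
env:
- name: MY_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: OTEL_RESOURCE_ATTRIBUTES
  value: k8s.pod.ip=$(MY_POD_IP)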

The output above is from minikube, but chains such as CNI-HOSTPORT-DNAT and CNI-HOSTPORT-MASQ are also defined on Azure Kubernetes Service. You can check them with a command like the following.

kubectl debug $(kubectl get nodes --output name) -i \
  --image=mcr.microsoft.com/cbl-mariner/busybox:2.0 \
  --profile=sysadmin \
  -- chroot /host iptables -L -t nat -n

Why are attributes attached when a Service is used?

As mentioned above, specifying service.enabled: true in values.yaml creates a Service with internalTrafficPolicy: Local.
An excerpt of the Service definition looks like this.

apiVersion: v1
kind: Service
metadata:
  name: opentelemetry-collector
  namespace: opentelemetry-collector
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: opentelemetry-collector
    app.kubernetes.io/instance: opentelemetry-collector
    component: agent-collector
  internalTrafficPolicy: Local
  ports:
    - name: otlp
      port: 4317
      targetPort: 4317
      protocol: TCP
      appProtocol: grpc

internalTrafficPolicyLocal の場合、同一 node に対してしかリクエストを送れなくなってしまいますが、その代わり送信元 Pod の IP アドレスは維持されます。1

That is why the processor can look up the corresponding Pod from the connection IP.

For reference, the output of iptables -L -t nat -n is shown below. Note that in the KUBE-SVL-* chains, KUBE-MARK-MASQ only matches sources outside 10.244.0.0/16, so requests from Pods on the node are DNATed to the collector Pod without having their source address rewritten.

Output
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
DOCKER_OUTPUT  all  --  0.0.0.0/0            192.168.5.2         
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL
CNI-HOSTPORT-DNAT  all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
DOCKER_OUTPUT  all  --  0.0.0.0/0            192.168.5.2         
DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL
CNI-HOSTPORT-DNAT  all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
CNI-HOSTPORT-MASQ  all  --  0.0.0.0/0            0.0.0.0/0            /* CNI portfwd requiring masquerade */
KUBE-POSTROUTING  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes postrouting rules */
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0           
DOCKER_POSTROUTING  all  --  0.0.0.0/0            192.168.5.2         
CNI-63ca6a5e53e580099e762a37  all  --  10.244.0.2           0.0.0.0/0            /* name: "bridge" id: "91c4bb4e8c3975c045ab9b40179a31b673c18adeed0261e72a93480a852deb63" */
CNI-49b91e9558fcd61a2007a64a  all  --  10.244.0.3           0.0.0.0/0            /* name: "bridge" id: "f74edbf5656551c51b28f524a13fbca17bfd0577d0faf1f561932a090f99255f" */

Chain CNI-49b91e9558fcd61a2007a64a (1 references)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            10.244.0.0/16        /* name: "bridge" id: "f74edbf5656551c51b28f524a13fbca17bfd0577d0faf1f561932a090f99255f" */
MASQUERADE  all  --  0.0.0.0/0           !224.0.0.0/4          /* name: "bridge" id: "f74edbf5656551c51b28f524a13fbca17bfd0577d0faf1f561932a090f99255f" */

Chain CNI-63ca6a5e53e580099e762a37 (1 references)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            10.244.0.0/16        /* name: "bridge" id: "91c4bb4e8c3975c045ab9b40179a31b673c18adeed0261e72a93480a852deb63" */
MASQUERADE  all  --  0.0.0.0/0           !224.0.0.0/4          /* name: "bridge" id: "91c4bb4e8c3975c045ab9b40179a31b673c18adeed0261e72a93480a852deb63" */

Chain CNI-DN-49b91e9558fcd61a2007a (2 references)
target     prot opt source               destination         
CNI-HOSTPORT-SETMARK  udp  --  10.244.0.0/16        0.0.0.0/0            udp dpt:6831
CNI-HOSTPORT-SETMARK  udp  --  127.0.0.1            0.0.0.0/0            udp dpt:6831
DNAT       udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:6831 to:10.244.0.3:6831
CNI-HOSTPORT-SETMARK  tcp  --  10.244.0.0/16        0.0.0.0/0            tcp dpt:14250
CNI-HOSTPORT-SETMARK  tcp  --  127.0.0.1            0.0.0.0/0            tcp dpt:14250
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:14250 to:10.244.0.3:14250
CNI-HOSTPORT-SETMARK  tcp  --  10.244.0.0/16        0.0.0.0/0            tcp dpt:14268
CNI-HOSTPORT-SETMARK  tcp  --  127.0.0.1            0.0.0.0/0            tcp dpt:14268
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:14268 to:10.244.0.3:14268
CNI-HOSTPORT-SETMARK  tcp  --  10.244.0.0/16        0.0.0.0/0            tcp dpt:4317
CNI-HOSTPORT-SETMARK  tcp  --  127.0.0.1            0.0.0.0/0            tcp dpt:4317
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:4317 to:10.244.0.3:4317
CNI-HOSTPORT-SETMARK  tcp  --  10.244.0.0/16        0.0.0.0/0            tcp dpt:4318
CNI-HOSTPORT-SETMARK  tcp  --  127.0.0.1            0.0.0.0/0            tcp dpt:4318
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:4318 to:10.244.0.3:4318
CNI-HOSTPORT-SETMARK  tcp  --  10.244.0.0/16        0.0.0.0/0            tcp dpt:9411
CNI-HOSTPORT-SETMARK  tcp  --  127.0.0.1            0.0.0.0/0            tcp dpt:9411
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:9411 to:10.244.0.3:9411

Chain CNI-HOSTPORT-DNAT (2 references)
target     prot opt source               destination         
CNI-DN-49b91e9558fcd61a2007a  tcp  --  0.0.0.0/0            0.0.0.0/0            /* dnat name: "bridge" id: "f74edbf5656551c51b28f524a13fbca17bfd0577d0faf1f561932a090f99255f" */ multiport dports 14250,14268,4317,4318,9411
CNI-DN-49b91e9558fcd61a2007a  udp  --  0.0.0.0/0            0.0.0.0/0            /* dnat name: "bridge" id: "f74edbf5656551c51b28f524a13fbca17bfd0577d0faf1f561932a090f99255f" */ multiport dports 6831

Chain CNI-HOSTPORT-MASQ (1 references)
target     prot opt source               destination         
MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0            mark match 0x2000/0x2000

Chain CNI-HOSTPORT-SETMARK (12 references)
target     prot opt source               destination         
MARK       all  --  0.0.0.0/0            0.0.0.0/0            /* CNI portfwd masquerade mark */ MARK or 0x2000

Chain DOCKER (2 references)
target     prot opt source               destination         
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           

Chain DOCKER_OUTPUT (2 references)
target     prot opt source               destination         
DNAT       tcp  --  0.0.0.0/0            192.168.5.2          tcp dpt:53 to:127.0.0.11:39019
DNAT       udp  --  0.0.0.0/0            192.168.5.2          udp dpt:53 to:127.0.0.11:34808

Chain DOCKER_POSTROUTING (1 references)
target     prot opt source               destination         
SNAT       tcp  --  127.0.0.11           0.0.0.0/0            to:192.168.5.2:53
SNAT       udp  --  127.0.0.11           0.0.0.0/0            to:192.168.5.2:53

Chain KUBE-KUBELET-CANARY (0 references)
target     prot opt source               destination         

Chain KUBE-MARK-MASQ (20 references)
target     prot opt source               destination         
MARK       all  --  0.0.0.0/0            0.0.0.0/0            MARK or 0x4000

Chain KUBE-NODEPORTS (1 references)
target     prot opt source               destination         

Chain KUBE-POSTROUTING (1 references)
target     prot opt source               destination         
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           
MARK       all  --  0.0.0.0/0            0.0.0.0/0            MARK xor 0x4000
MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service traffic requiring SNAT */ random-fully

Chain KUBE-PROXY-CANARY (0 references)
target     prot opt source               destination         

Chain KUBE-SEP-4LPIWUPUF5AGFYOT (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.0.3           0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:otlp-http */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:otlp-http */ tcp to:10.244.0.3:4318

Chain KUBE-SEP-7OD2PDG3PD7J2VZ7 (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.0.3           0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:jaeger-grpc */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:jaeger-grpc */ tcp to:10.244.0.3:14250

Chain KUBE-SEP-CQ3FDE7FTJ5XNAXT (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.0.3           0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:jaeger-thrift */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:jaeger-thrift */ tcp to:10.244.0.3:14268

Chain KUBE-SEP-CRMKBXNSVTMUHCM2 (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.0.3           0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:otlp */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:otlp */ tcp to:10.244.0.3:4317

Chain KUBE-SEP-DEN5DS53K2NB2AKB (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.0.3           0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:zipkin */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:zipkin */ tcp to:10.244.0.3:9411

Chain KUBE-SEP-IT2ZTR26TO4XFPTO (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.0.2           0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */ tcp to:10.244.0.2:53

Chain KUBE-SEP-N4G2XR5TDX7PQE7P (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.0.2           0.0.0.0/0            /* kube-system/kube-dns:metrics */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:metrics */ tcp to:10.244.0.2:9153

Chain KUBE-SEP-QAAOH7NY3YNSNY24 (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.0.3           0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:jaeger-compact */
DNAT       udp  --  0.0.0.0/0            0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:jaeger-compact */ udp to:10.244.0.3:6831

Chain KUBE-SEP-VPILYQBSPPXYB66K (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  192.168.49.2         0.0.0.0/0            /* default/kubernetes:https */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ tcp to:192.168.49.2:8443

Chain KUBE-SEP-YIL6JZP7A3QYXJU2 (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.0.2           0.0.0.0/0            /* kube-system/kube-dns:dns */
DNAT       udp  --  0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns */ udp to:10.244.0.2:53

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination         
KUBE-SVC-TCOU7JCQXEZGVUNU  udp  --  0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:dns cluster IP */
KUBE-SVC-ERIFXISQEP7F7OF4  tcp  --  0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:dns-tcp cluster IP */
KUBE-SVC-JD5MR3NA4I4DYORP  tcp  --  0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:metrics cluster IP */
KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  0.0.0.0/0            10.96.0.1            /* default/kubernetes:https cluster IP */
KUBE-SVL-VX6IA5VL6FLYVBXT  tcp  --  0.0.0.0/0            10.105.249.160       /* opentelemetry-collector/opentelemetry-collector:otlp-http cluster IP */
KUBE-SVL-WQZRZBSAAJJ5ENNE  tcp  --  0.0.0.0/0            10.105.249.160       /* opentelemetry-collector/opentelemetry-collector:zipkin cluster IP */
KUBE-SVL-AR7YBPPNY2K6GKT3  udp  --  0.0.0.0/0            10.105.249.160       /* opentelemetry-collector/opentelemetry-collector:jaeger-compact cluster IP */
KUBE-SVL-7S5WBWADAYVHRMPZ  tcp  --  0.0.0.0/0            10.105.249.160       /* opentelemetry-collector/opentelemetry-collector:jaeger-grpc cluster IP */
KUBE-SVL-GQJJUNO6YZCBZIVL  tcp  --  0.0.0.0/0            10.105.249.160       /* opentelemetry-collector/opentelemetry-collector:jaeger-thrift cluster IP */
KUBE-SVL-7E5QNFKIEQM22BF2  tcp  --  0.0.0.0/0            10.105.249.160       /* opentelemetry-collector/opentelemetry-collector:otlp cluster IP */
KUBE-NODEPORTS  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.96.0.10           /* kube-system/kube-dns:dns-tcp cluster IP */
KUBE-SEP-IT2ZTR26TO4XFPTO  all  --  0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp -> 10.244.0.2:53 */

Chain KUBE-SVC-JD5MR3NA4I4DYORP (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.96.0.10           /* kube-system/kube-dns:metrics cluster IP */
KUBE-SEP-N4G2XR5TDX7PQE7P  all  --  0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:metrics -> 10.244.0.2:9153 */

Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.96.0.1            /* default/kubernetes:https cluster IP */
KUBE-SEP-VPILYQBSPPXYB66K  all  --  0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https -> 192.168.49.2:8443 */

Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  udp  -- !10.244.0.0/16        10.96.0.10           /* kube-system/kube-dns:dns cluster IP */
KUBE-SEP-YIL6JZP7A3QYXJU2  all  --  0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns -> 10.244.0.2:53 */

Chain KUBE-SVL-7E5QNFKIEQM22BF2 (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.105.249.160       /* opentelemetry-collector/opentelemetry-collector:otlp cluster IP */
KUBE-SEP-CRMKBXNSVTMUHCM2  all  --  0.0.0.0/0            0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:otlp -> 10.244.0.3:4317 */

Chain KUBE-SVL-7S5WBWADAYVHRMPZ (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.105.249.160       /* opentelemetry-collector/opentelemetry-collector:jaeger-grpc cluster IP */
KUBE-SEP-7OD2PDG3PD7J2VZ7  all  --  0.0.0.0/0            0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:jaeger-grpc -> 10.244.0.3:14250 */

Chain KUBE-SVL-AR7YBPPNY2K6GKT3 (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  udp  -- !10.244.0.0/16        10.105.249.160       /* opentelemetry-collector/opentelemetry-collector:jaeger-compact cluster IP */
KUBE-SEP-QAAOH7NY3YNSNY24  all  --  0.0.0.0/0            0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:jaeger-compact -> 10.244.0.3:6831 */

Chain KUBE-SVL-GQJJUNO6YZCBZIVL (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.105.249.160       /* opentelemetry-collector/opentelemetry-collector:jaeger-thrift cluster IP */
KUBE-SEP-CQ3FDE7FTJ5XNAXT  all  --  0.0.0.0/0            0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:jaeger-thrift -> 10.244.0.3:14268 */

Chain KUBE-SVL-VX6IA5VL6FLYVBXT (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.105.249.160       /* opentelemetry-collector/opentelemetry-collector:otlp-http cluster IP */
KUBE-SEP-4LPIWUPUF5AGFYOT  all  --  0.0.0.0/0            0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:otlp-http -> 10.244.0.3:4318 */

Chain KUBE-SVL-WQZRZBSAAJJ5ENNE (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.105.249.160       /* opentelemetry-collector/opentelemetry-collector:zipkin cluster IP */
KUBE-SEP-DEN5DS53K2NB2AKB  all  --  0.0.0.0/0            0.0.0.0/0            /* opentelemetry-collector/opentelemetry-collector:zipkin -> 10.244.0.3:9411 */

Note that for logs emitted right after a Pod starts, the Kubernetes attributes processor may receive the telemetry data before it has detected the Pod, in which case no attributes are attached. This is why the sample code sleeps for one second right after startup.

For attributes that you cannot afford to lose when the Kubernetes attributes processor fails to attach them, it is better to specify them via OTEL_RESOURCE_ATTRIBUTES.
That is why the Pod definition specifies the same values in both the resource.opentelemetry.io/* annotations and OTEL_RESOURCE_ATTRIBUTES: the former is for the Filelog receiver and the latter for the OTLP receiver. If you can tolerate attributes such as service.name being missing from logs emitted right after startup, OTEL_RESOURCE_ATTRIBUTES is unnecessary.

  1. When I tried this on minikube and Azure Kubernetes Service (v1.33.3), both the iptables rules and the observed behavior suggest that, for communication between Pods on the same network, the Pod IP is not rewritten even when internalTrafficPolicy is Cluster; the only difference seems to be whether the destination is fixed to the local node or chosen at random.