Codec values returned by the Conversation Details API

We are troubleshooting voice quality issues that we suspect may be related to the codec.

Retrieving Conversation Details via /api/v2/analytics/conversations/{conversationId}/details returns a codecs array within each mediaEndpointStats object.
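For reference, here's roughly how we're pulling those stats (a minimal Python sketch; the region host, token, and conversation ID are placeholders, and I'm assuming the stats hang off participants → sessions as mediaEndpointStats):

```python
import requests

# Placeholders: substitute your own region host, OAuth bearer token,
# and conversation ID.
API_HOST = "https://api.mypurecloud.com"
TOKEN = "<oauth-bearer-token>"
CONVERSATION_ID = "<conversation-id>"

resp = requests.get(
    f"{API_HOST}/api/v2/analytics/conversations/{CONVERSATION_ID}/details",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
details = resp.json()

# Walk participants -> sessions -> mediaEndpointStats and print each
# endpoint's codecs array.
for participant in details.get("participants", []):
    for session in participant.get("sessions", []):
        for endpoint in session.get("mediaEndpointStats", []):
            print(session.get("sessionId"), endpoint.get("codecs"))
```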

Is that codec supposed to be the one that was negotiated / agreed upon for that conversation?

The reason I say "supposed to" is that the returned values are currently "audio/opus" and "audio/PCMU", which I don't believe are supported by our current phones - Polycom SoundPoint IP 430.

We currently have our Trunk Preferred Codec List configured as 1. audio/opus, 2. audio/PCMU, 3. audio/PCMA, 4. audio/g729, with the expectation that conversations would negotiate down to audio/g729.

Yes, the codecs property is documented as "The MIME types of the audio encodings used by the audio streams belonging to this endpoint". If the media encoding isn't behaving as expected, please open a case with PureCloud Care to investigate.

Thanks Tim.
After reviewing our configuration again, I think it's working as expected.
One follow up question:
When I see two codecs referenced in one segment where an agent is interacting, does that usually indicate that one codec is being used by the call recording and another by the actual conversation?

Please open a case with PureCloud Care to investigate. The dev team has indicated they would like to look at the data for this conversation to see why you're getting multiple codecs in your specific case. Digging into customer-specific data is outside the scope of the dev forum.

Just created case 0002432052

Curiously, I noticed that we get more detail from the API for interactions from today than for interactions from a few days ago. Not sure if that's part of the API contract.

I received some late commentary on this. If you need a response about the specific values you're seeing or have follow-up questions, please continue to work through the care ticket.

The array doesn't consist of segment-level media stats (we don't have such a thing at this time); rather, it is the collection of "media endpoint" stats. In general, many media endpoints together make up each communication (aka session) within a conversation, and at the lowest level each media endpoint can have multiple "streams". The codecs array tells you all the codecs that were used by streams in a given media endpoint. These are not application-level concepts, but what the codecs array does for customers is give them a bucket of all the codecs a session/communication used, so if they keep seeing bad audio correlating with a codec different from the usual one, it can help them troubleshoot that.
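If it helps to see the shape of that, here's a rough sketch in Python of bucketing the codecs per session; the participants/sessions/mediaEndpointStats/codecs field names are assumptions based on how I read the analytics details payload, not a contract:

```python
from collections import defaultdict

def codecs_by_session(details: dict) -> dict:
    """Collect the set of codecs used by each session's media endpoints.

    `details` is the JSON body returned by
    /api/v2/analytics/conversations/{conversationId}/details.
    """
    buckets = defaultdict(set)
    for participant in details.get("participants", []):
        for session in participant.get("sessions", []):
            session_id = session.get("sessionId")
            for endpoint in session.get("mediaEndpointStats", []):
                # Each media endpoint reports every codec its streams used.
                buckets[session_id].update(endpoint.get("codecs", []))
    return buckets

# A session whose bucket is, say, {"audio/opus", "audio/PCMU"} had streams
# on two different codecs; correlating that with bad-audio reports can
# narrow down which codec is involved.
```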
