Streaming using a bare gRPC client
Generate a gRPC client from scratch
This guide shows how to use the Behavioral Signals Streaming API directly with gRPC clients generated from the Protocol Buffer definition, without using the Python SDK wrapper.
1. Generating gRPC Client Code
Prerequisites
pip install grpcio grpcio-tools pydub
Generate Python Client
- Create a `protos/` directory in your project
- Create an `api.proto` file from our Protocol Buffer documentation
- Run:
python -m grpc_tools.protoc \
--proto_path=protos/ \
--python_out=. \
--grpc_python_out=. \
protos/api.proto
This generates:
- `api_pb2.py` - Message classes (AudioStream, StreamResult, etc.)
- `api_pb2_grpc.py` - Service stub classes (BehavioralStreamingApiStub)
2. Service Endpoints
The API provides two streaming services:
| Service | Method | Purpose |
|---|---|---|
| BehavioralStreamingApi | StreamAudio | Real-time behavioral analysis |
| BehavioralStreamingApi | DeepfakeDetection | Real-time deepfake detection |
3. Message Protocol Conventions
First Message Rule
The first `AudioStream` message must contain only the configuration, with no audio data; audio is streamed in the subsequent messages.
def requests():
# First message: Configuration only (no audio!)
yield pb.AudioStream(
cid=your_cid,
x_auth_token="your_api_key",
config=pb.AudioConfig(
encoding=pb.AudioEncoding.LINEAR_PCM,
sample_rate_hertz=16000
)
)
# Subsequent messages: Audio chunks only (no config!)
for audio_chunk in audio_chunks:
yield pb.AudioStream(
cid=your_cid,
x_auth_token="your_api_key",
audio_content=audio_chunk
)
Required in every message
Both credentials are obtained by creating a new project:
- `cid` - Your project ID (integer)
- `x_auth_token` - Your API key (string)
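One way to honor this convention is a small helper that merges the required credential fields into every message's keyword arguments before they are passed to the message constructor. This is a minimal sketch: `with_credentials` is a hypothetical helper, and the credential values are placeholders.

```python
CLIENT_ID = 12345          # placeholder project ID
API_KEY = "your_api_key"   # placeholder API key

def with_credentials(**fields):
    """Merge per-message fields with the credentials that every
    AudioStream message must carry."""
    return {"cid": CLIENT_ID, "x_auth_token": API_KEY, **fields}

# First message carries only the config...
first_kwargs = with_credentials(
    config={"encoding": "LINEAR_PCM", "sample_rate_hertz": 16000}
)
# ...subsequent messages carry only raw audio bytes.
chunk_kwargs = with_credentials(audio_content=b"\x00\x00")
```

Each dict can then be splatted into the generated message class, e.g. `pb.AudioStream(**chunk_kwargs)`.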
4. Audio Requirements
- Format: 16-bit PCM, mono, 16 kHz
- Chunk size: Variable; we recommend between 100 ms and 500 ms of audio per chunk
- Encoding: `LINEAR_PCM` is currently the only supported encoding
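The recommended chunk sizes translate directly into byte counts: 16-bit mono PCM at 16 kHz produces 32,000 bytes per second, so 100 ms-500 ms corresponds to 3,200-16,000 bytes per chunk. A minimal sketch of the arithmetic (`chunk_size_bytes` is a hypothetical helper, not part of the API):

```python
SAMPLE_RATE_HZ = 16000   # required sample rate
BYTES_PER_SAMPLE = 2     # 16-bit PCM, mono

def chunk_size_bytes(duration_s: float) -> int:
    """Bytes of raw 16-bit, 16 kHz mono PCM covering duration_s seconds."""
    return int(SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * duration_s)

print(chunk_size_bytes(0.10))  # 3200 bytes  = 100 ms
print(chunk_size_bytes(0.25))  # 8000 bytes  = 250 ms
print(chunk_size_bytes(0.50))  # 16000 bytes = 500 ms
```

The complete examples below use the 250 ms value, i.e. `int(16000 * 0.25 * 2)` = 8000 bytes.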
5. Complete Examples
Behavioral Analysis Streaming
This is an example app that streams an audio file and displays the behavioral results:
import grpc
from pydub import AudioSegment
import api_pb2 as pb
import api_pb2_grpc as pb_grpc
# Replace these
CLIENT_ID = 12345 # Replace with your actual project ID
API_KEY = "your_api_key" # Replace with your actual API key
AUDIO_FILE = "test.wav" # Replace with your audio file path here
audio = AudioSegment.from_file(AUDIO_FILE)
audio = audio.set_frame_rate(16000).set_channels(1).set_sample_width(2)
channel = grpc.secure_channel(
"streaming.behavioralsignals.com:443", grpc.ssl_channel_credentials()
)
stub = pb_grpc.BehavioralStreamingApiStub(channel)
def requests():
yield pb.AudioStream(
cid=CLIENT_ID,
x_auth_token=API_KEY,
config=pb.AudioConfig(
encoding=pb.AudioEncoding.LINEAR_PCM, sample_rate_hertz=16000
),
)
chunk_size = int(16000 * 0.25 * 2)  # 0.25 s of 16-bit mono audio = 8000 bytes
for i in range(0, len(audio.raw_data), chunk_size):
chunk = audio.raw_data[i : i + chunk_size]
yield pb.AudioStream(cid=CLIENT_ID, x_auth_token=API_KEY, audio_content=chunk)
try:
for response in stub.StreamAudio(requests()):
print(f"\n--- Message {response.message_id} ---")
for result in response.result:
# Show timing info!
print(
f"{result.start_time}s-{result.end_time}s | {result.task}: {result.final_label}"
)
except Exception as e:
print(f"Error: {e}")
finally:
channel.close()
Deepfake Detection Streaming
This is an example app that streams an audio file and displays the deepfake detection results:
import grpc
from pydub import AudioSegment
import api_pb2 as pb
import api_pb2_grpc as pb_grpc
# Replace these
CLIENT_ID = 12345 # Replace with your actual project ID
API_KEY = "your_api_key" # Replace with your actual API key
AUDIO_FILE = "test.wav" # Replace with your audio file path here
audio = AudioSegment.from_file(AUDIO_FILE)
audio = audio.set_frame_rate(16000).set_channels(1).set_sample_width(2)
channel = grpc.secure_channel(
"streaming.behavioralsignals.com:443", grpc.ssl_channel_credentials()
)
stub = pb_grpc.BehavioralStreamingApiStub(channel)
def requests():
yield pb.AudioStream(
cid=CLIENT_ID,
x_auth_token=API_KEY,
config=pb.AudioConfig(
encoding=pb.AudioEncoding.LINEAR_PCM, sample_rate_hertz=16000
),
)
chunk_size = int(16000 * 0.25 * 2) # 0.25 seconds
for i in range(0, len(audio.raw_data), chunk_size):
chunk = audio.raw_data[i : i + chunk_size]
yield pb.AudioStream(cid=CLIENT_ID, x_auth_token=API_KEY, audio_content=chunk)
try:
# Only difference: DeepfakeDetection instead of StreamAudio
for response in stub.DeepfakeDetection(requests()):
print(f"\n--- Message {response.message_id} ---")
for result in response.result:
print(
f"{result.start_time}s-{result.end_time}s | {result.task}: {result.final_label}"
)
except Exception as e:
print(f"Error: {e}")
finally:
channel.close()
6. Response Format
Each response contains:
- `message_id` - Incremental counter
- `result[]` - Array of analysis results

Each result has:
- `start_time` / `end_time` - Segment timing
- `task` - Analysis type (emotion, gender, deepfake, etc.)
- `final_label` - Primary prediction
- `prediction[]` - All predictions with confidence scores
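As an illustration of walking this structure, the sketch below uses plain dataclass stand-ins for the protobuf messages. The real classes come from `api_pb2`; the top-level field names follow the list above, but the fields of the nested prediction entries (`label`, `confidence`) are assumptions made for this sketch, and the sample values are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Prediction:          # stand-in for one entry of result.prediction[]
    label: str
    confidence: float      # assumed name for the confidence score

@dataclass
class Result:              # stand-in for one entry of response.result[]
    start_time: float
    end_time: float
    task: str
    final_label: str
    prediction: list = field(default_factory=list)

result = Result(
    start_time=0.0, end_time=2.5, task="emotion", final_label="happy",
    prediction=[Prediction("happy", 0.81), Prediction("neutral", 0.19)],
)

# final_label is the top-scoring prediction; prediction[] carries
# the full distribution over labels for the segment.
best = max(result.prediction, key=lambda p: p.confidence)
for p in result.prediction:
    print(f"{result.start_time}s-{result.end_time}s | {result.task}: "
          f"{p.label} ({p.confidence:.0%})")
```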