Compare commits

a5223a01fc ... main (16 commits)

- 8ab422a7aa
- e4f27c012d
- 81120ac7ea
- 5e06254e1a
- 7087d553b0
- 9a53a33341
- 95b01fd436
- 2db40e4547
- 111f4b347e
- c08ee70fe8
- 7141bd900e
- 269da2c569
- 249b1cb210
- 8cf76c1d71
- da7e1b7276
- af76717871
.gitignore (vendored, 31 lines changed):

```diff
@@ -1,3 +1,30 @@
-music
-.venv
 music/
+
+# Python
+__pycache__/
+*.py[cod]
+*.pyd
+.Python
+
+# Virtual envs
+.venv/
+venv/
+ENV/
+
+# Env files
+.env
+.env.*
+
+# Local config (may contain secrets)
+config.json
+
+# Tooling caches
+.pytest_cache/
+.mypy_cache/
+.ruff_cache/
+.coverage
+htmlcov/
+
+# OS / editor
+.DS_Store
+.vscode/
```
README.md (new file, 286 lines):

# TechDJ Pro

TechDJ Pro is a local DJ web app with a dual-port architecture:

- **DJ Panel** (mix/load tracks + start broadcast): `http://localhost:5000`
- **Listener Page** (receive the live stream): `http://localhost:5001`

It supports:

- Local library playback (files in `music/`)
- Downloading audio from URLs (via `yt-dlp` when available, with fallback)
- Live streaming from the DJ browser to listeners using Socket.IO
- Live listening via an **MP3 stream** (`/stream.mp3`) generated server-side with **ffmpeg**
- Real-time visual spectrum analyzer for listeners
- Optional password protection for the DJ panel
- Compatibility with reverse proxies like Cloudflare
- **Remote stream relay**: relay live streams from other DJ servers to your listeners

---

## Requirements

### System packages

- **Python**: 3.10+ recommended
- **ffmpeg**: strongly recommended (required for reliable downloads/transcoding and the MP3 fallback)

Linux (Debian/Ubuntu):

```bash
sudo apt update
sudo apt install -y ffmpeg
```

macOS (Homebrew):

```bash
brew install ffmpeg
```

Windows:

- Install ffmpeg from https://ffmpeg.org/download.html
- Ensure `ffmpeg` is on your PATH
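To confirm both tools are actually visible from Python, here is a small sketch using only the standard library (the function name is illustrative, not part of TechDJ):

```python
import importlib.util
import shutil

def available_tools():
    """Report which optional dependencies this machine can actually use."""
    return {
        "ffmpeg": shutil.which("ffmpeg") is not None,              # system binary on PATH
        "yt_dlp": importlib.util.find_spec("yt_dlp") is not None,  # Python package
    }

print(available_tools())
```

If either entry is `False`, downloads fall back to the Cobalt API path described later in this page.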
### Python dependencies

All Python dependencies are listed in `requirements.txt`:

- flask
- flask-socketio
- eventlet
- yt-dlp
- python-dotenv
- requests

---

## Install (from scratch)

```bash
git clone https://git.computertech.dev/computertech/techdj.git
cd techdj

# Create venv
python3 -m venv .venv

# Activate venv
source .venv/bin/activate

# Ensure pip exists (some environments require this)
python -m ensurepip --upgrade || true

# Upgrade tooling (recommended)
python -m pip install --upgrade pip setuptools wheel

# Install deps
pip install -r requirements.txt
```

---
## Optional configuration (.env)

Create a `.env` file in the project root if you want YouTube search results to work in the UI:

```dotenv
YOUTUBE_API_KEY=YOUR_KEY_HERE
```

Notes:

- If you don't set `YOUTUBE_API_KEY`, you can still paste a YouTube URL directly into a deck/download box.
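On the server side, python-dotenv's `load_dotenv()` fills `os.environ` from the `.env` file at startup, so the check is just an environment lookup. A minimal sketch (the helper name here is hypothetical):

```python
import os

def youtube_search_enabled() -> bool:
    # After load_dotenv() has run, the key is an ordinary environment variable;
    # an unset or empty value disables search but not direct URL pasting.
    return bool(os.getenv("YOUTUBE_API_KEY"))
```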
## Optional DJ panel password (config.json)

By default, anyone who can reach the DJ server (`:5000`) can open the DJ panel.

If you want to lock it while you're playing live, create a `config.json` (not committed) in the project root:

```json
{
  "dj_panel_password": "your-strong-password"
}
```

Behavior:

- If `dj_panel_password` is empty/missing, the DJ panel is **unlocked** (default).
- If set, visiting `http://<DJ_MACHINE_IP>:5000` shows a login prompt.
- Listener (`:5001`) is not affected.
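One way the documented behavior can be implemented: a missing file and an empty string both mean "unlocked". This is a sketch with a hypothetical helper name, not the actual server code:

```python
import json
from pathlib import Path

def load_dj_password(path="config.json"):
    """Return the panel password, or None when the panel should stay unlocked."""
    try:
        cfg = json.loads(Path(path).read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return None  # no config, or unreadable config: unlocked (documented default)
    return cfg.get("dj_panel_password") or None  # empty string: unlocked
```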
---

## Run

Start the server:

```bash
source .venv/bin/activate
python server.py
```

You should see output like:

- DJ panel: `http://localhost:5000`
- Listener page: `http://localhost:5001`

---

## Using the app

### DJ workflow

1. Open the DJ Panel: `http://localhost:5000`
2. Click **INITIALIZE SYSTEM**
3. Load/play music:
   - Upload MP3s (folder/upload button)
   - Or download from URLs (paste into deck input / download controls)
4. Open the streaming panel and click **START BROADCAST**

### Remote Stream Relay

TechDJ can relay live streams from other DJ servers:

1. Open the DJ Panel: `http://localhost:5000`
2. Click the streaming panel (📡 LIVE STREAM)
3. In the "Remote Stream Relay" section, paste a remote stream URL (e.g., `http://remote.dj/stream.mp3`)
4. Click **START RELAY**
5. Your listeners will receive the relayed stream
6. Click **STOP RELAY** to end the relay

### Listener workflow

1. Open the Listener Page:
   - Same machine: `http://localhost:5001`
   - Another device on your LAN/Wi‑Fi: `http://<DJ_MACHINE_IP>:5001`
2. Click **ENABLE AUDIO** if prompted
   - Browsers block autoplay by default; user interaction is required.
3. Enjoy the live stream with real-time spectrum visualization

---
## Multi-device / LAN setup

### Find your DJ machine IP

Linux:

```bash
ip addr
```

Windows:

```bat
ipconfig
```

macOS:

```bash
ifconfig
```

Use the LAN IP (commonly `192.168.x.x` or `10.x.x.x`).

### Firewall

Make sure the DJ machine allows inbound connections on:

- TCP `5000` (DJ Panel)
- TCP `5001` (Listener)

If listeners can't connect, this is often the cause.
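A quick way to test reachability from a listener device, using only the Python standard library (the helper name is illustrative):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From another device on the LAN, e.g.:
#   port_open("192.168.1.50", 5001)
```

If this returns `False` from a second machine while the server is running, check the firewall rules above before anything else.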
---

## Streaming

TechDJ serves the listener audio as an **MP3 HTTP stream**:

- MP3 stream endpoint: `http://<DJ_MACHINE_IP>:5001/stream.mp3`

This requires `ffmpeg` installed on the DJ/server machine.

### Debug endpoint

- Stream debug JSON: `http://<DJ_MACHINE_IP>:5001/stream_debug`

This shows whether ffmpeg is running and whether MP3 bytes are being produced.
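A sketch of checking that response programmatically. Only `transcoder_bytes_out` is named elsewhere on this page (in Troubleshooting); any other field names in the payload are assumptions about the response shape:

```python
import json

def transcoder_healthy(payload: str) -> bool:
    # A growing, nonzero `transcoder_bytes_out` means ffmpeg is producing MP3 bytes.
    info = json.loads(payload)
    return info.get("transcoder_bytes_out", 0) > 0

# Fetch the payload with e.g.:
#   curl http://<DJ_MACHINE_IP>:5001/stream_debug
```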
---

## Deployment behind reverse proxies (e.g., Cloudflare)

TechDJ is compatible with reverse proxies like Cloudflare. To ensure proper functionality:

- Use same-origin URLs for streaming to avoid port restrictions.
- Configure your proxy to bypass caching for the `/stream.mp3` endpoint, as it's a live audio stream.
- Set cache-control headers to `no-cache` for `/stream.mp3` to prevent buffering issues.
- Ensure WebSocket connections (used by Socket.IO) are allowed through the proxy.

Example Cloudflare page rule:

- URL: `yourdomain.com/stream.mp3`
- Cache Level: Bypass
- Edge Cache TTL: 0 seconds
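If you self-host the proxy instead, the same rules translate to an nginx server block roughly like this (a sketch, assuming nginx fronts the listener app on `127.0.0.1:5001`):

```nginx
# Live MP3 stream: never cache, never buffer.
location /stream.mp3 {
    proxy_pass http://127.0.0.1:5001/stream.mp3;
    proxy_buffering off;
    add_header Cache-Control "no-cache";
}

# Socket.IO: allow the WebSocket upgrade through.
location /socket.io/ {
    proxy_pass http://127.0.0.1:5001/socket.io/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```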
---

## Troubleshooting

### Listener says "Browser blocked audio"

- Click **ENABLE AUDIO**
- Try a normal click (not keyboard-only)
- Disable strict autoplay blocking for the site, if your browser supports it

### Listener says "NotSupportedError"

- Your browser likely doesn't support the default WebM/Opus MediaSource path
- Ensure `ffmpeg` is installed on the server
- Try opening the MP3 fallback directly: `http://<DJ_MACHINE_IP>:5001/stream.mp3`

### DJ says broadcast is live but listeners hear nothing

- Confirm:
  - A deck is actually playing
  - The crossfader isn't fully on the silent side
  - Volumes aren't at 0
- Check `http://<DJ_MACHINE_IP>:5001/stream_debug` and see if `transcoder_bytes_out` increases

### Spectrum visualizer not showing

- Ensure the listener page is loaded and audio is enabled.
- Check the browser console for errors related to the Web Audio API.

### Remote relay not working

- Ensure the remote stream URL is accessible and returns valid audio
- Check that `ffmpeg` is installed and can handle the remote stream format
- Verify the remote stream is MP3 or a format supported by ffmpeg
- Check server logs for ffmpeg errors when starting the relay
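Before digging into ffmpeg logs, it can help to rule out a malformed relay URL. A cheap syntactic pre-check (hypothetical helper, not part of TechDJ):

```python
from urllib.parse import urlparse

def plausible_relay_url(url: str) -> bool:
    """Cheap sanity check before handing a relay URL to ffmpeg."""
    parts = urlparse(url.strip())
    # ffmpeg can read other schemes too, but the relay UI expects an HTTP(S) stream.
    return parts.scheme in ("http", "https") and bool(parts.netloc)
```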
### `pip` missing inside venv

Some Python installs create venvs without pip. Fix:

```bash
python -m ensurepip --upgrade
python -m pip install --upgrade pip
```

---

## Dev notes

- Main server: `server.py`
- Client UI logic: `script.js`
- Downloader: `downloader.py`
- Static assets: `index.html`, `style.css`

---

## License

No license specified.

Binary file not shown.
config.example.json (new file, 3 lines):

```json
{
  "dj_panel_password": ""
}
```
```diff
@@ -92,11 +92,27 @@ def download_mp3(url, quality='320'):
     # Prefer yt-dlp for YouTube because it can actually control MP3 output bitrate.
     if _can_use_ytdlp():
         try:
-            print(f"⬇️ Downloading via yt-dlp @ {quality_kbps}kbps...")
+            print(f"✨ Using yt-dlp (preferred method)")
+            print(f"⬇️ Downloading @ {quality_kbps}kbps...")
             return _download_with_ytdlp(url, quality_kbps)
         except Exception as e:
             # If yt-dlp fails for any reason, fall back to the existing Cobalt flow.
-            print(f"⚠️ yt-dlp failed, falling back to Cobalt: {e}")
+            print(f"⚠️ yt-dlp failed, falling back to Cobalt API: {e}")
+    else:
+        # Check what's missing
+        has_ffmpeg = shutil.which("ffmpeg") is not None
+        has_ytdlp = False
+        try:
+            import yt_dlp  # noqa: F401
+            has_ytdlp = True
+        except ImportError:
+            pass
+
+        if not has_ffmpeg:
+            print("⚠️ ffmpeg not found - using Cobalt API fallback")
+        if not has_ytdlp:
+            print("⚠️ yt-dlp not installed - using Cobalt API fallback")
+            print("   💡 Install with: pip install yt-dlp")
 
     try:
         # Use Cobalt v9 API to download
```
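The hunk above takes `quality` as a string of kbps and uses a `quality_kbps` value parsed earlier in `downloader.py` (not shown here). A hypothetical sketch of such parsing, clamped to common MP3 bitrates:

```python
def normalize_quality(quality="320"):
    """Parse a kbps string defensively; fall back to 320 and clamp to 64..320."""
    # Hypothetical helper for illustration; the real parsing is not shown in this diff.
    try:
        kbps = int(quality)
    except (TypeError, ValueError):
        kbps = 320
    return min(max(kbps, 64), 320)
```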
index.html (12 lines changed):

```diff
@@ -398,6 +398,16 @@
             <span class="quality-hint">Lower = more stable on poor connections</span>
           </div>
         </div>
+
+        <div class="remote-relay-section">
+          <h4>🔗 Remote Stream Relay</h4>
+          <div class="relay-controls">
+            <input type="text" id="remote-stream-url" placeholder="Paste remote stream URL (e.g., http://remote.dj/stream.mp3)" class="relay-url-input">
+            <button class="relay-btn" id="start-relay-btn" onclick="startRemoteRelay()">START RELAY</button>
+            <button class="relay-btn stop" id="stop-relay-btn" onclick="stopRemoteRelay()" style="display: none;">STOP RELAY</button>
+          </div>
+          <div class="relay-status" id="relay-status"></div>
+        </div>
       </div>
     </div>

@@ -413,6 +423,8 @@
       <div class="listener-content">
         <div class="now-playing" id="listener-now-playing">Waiting for stream...</div>
+
+        <canvas id="viz-listener" width="400" height="100"></canvas>

         <!-- Enable Audio Button (shown when autoplay is blocked) -->
         <button class="enable-audio-btn" id="enable-audio-btn" style="display: none;"
                 onclick="enableListenerAudio()">
```
script.js (418 lines changed):

```diff
@@ -1545,8 +1545,74 @@ let isBroadcasting = false;
 let autoStartStream = false;
+let listenerAudioContext = null;
+let listenerGainNode = null;
+let listenerAnalyserNode = null;
+let listenerMediaElementSourceNode = null;
+let listenerVuMeterRunning = false;
+let listenerChunksReceived = 0;
+
+function startListenerVUMeter() {
+    if (listenerVuMeterRunning) return;
+    listenerVuMeterRunning = true;
+
+    const draw = () => {
+        if (!listenerVuMeterRunning) return;
+        requestAnimationFrame(draw);
+
+        const canvas = document.getElementById('viz-listener');
+        if (!canvas || !listenerAnalyserNode) return;
+
+        const ctx = canvas.getContext('2d');
+        if (!ctx) return;
+
+        // Keep canvas sized correctly for DPI
+        const dpr = window.devicePixelRatio || 1;
+        const rect = canvas.getBoundingClientRect();
+        const targetW = Math.max(1, Math.floor(rect.width * dpr));
+        const targetH = Math.max(1, Math.floor(rect.height * dpr));
+        if (canvas.width !== targetW || canvas.height !== targetH) {
+            canvas.width = targetW;
+            canvas.height = targetH;
+        }
+
+        const analyser = listenerAnalyserNode;
+        const bufferLength = analyser.frequencyBinCount;
+        const dataArray = new Uint8Array(bufferLength);
+        analyser.getByteFrequencyData(dataArray);
+
+        const width = canvas.width;
+        const height = canvas.height;
+        const barCount = 32;
+        const barWidth = width / barCount;
+
+        ctx.fillStyle = '#0a0a12';
+        ctx.fillRect(0, 0, width, height);
+
+        // Listener uses the magenta hue (matches Deck B styling)
+        const hue = 280;
+        for (let i = 0; i < barCount; i++) {
+            const freqIndex = Math.floor(Math.pow(i / barCount, 1.5) * bufferLength);
+            const value = (dataArray[freqIndex] || 0) / 255;
+            const barHeight = value * height;
+
+            const lightness = 30 + (value * 50);
+            const gradient = ctx.createLinearGradient(0, height, 0, height - barHeight);
+            gradient.addColorStop(0, `hsl(${hue}, 100%, ${lightness}%)`);
+            gradient.addColorStop(1, `hsl(${hue}, 100%, ${Math.min(lightness + 20, 80)}%)`);
+
+            ctx.fillStyle = gradient;
+            ctx.fillRect(i * barWidth, height - barHeight, barWidth - 2, barHeight);
+        }
+    };
+
+    draw();
+}
 let currentStreamMimeType = null;
+
+function getMp3FallbackUrl() {
+    // Use same-origin so this works behind reverse proxies (e.g., Cloudflare) where :5001 may not be reachable.
+    return `${window.location.origin}/stream.mp3`;
+}
 
 // Initialize SocketIO connection
 function initSocket() {
     if (socket) return socket;
```
```diff
@@ -1559,11 +1625,10 @@ function initSocket() {
         window.location.hostname.startsWith('listen.') ||
         urlParams.get('listen') === 'true';
 
-    // If someone opens listener mode on the DJ port (e.g. :5000?listen=true),
-    // force the Socket.IO connection to the listener backend (:5001).
-    const serverUrl = (isListenerMode && window.location.port !== '5001' &&
-        !window.location.hostname.startsWith('music.') &&
-        !window.location.hostname.startsWith('listen.'))
+    // If someone opens listener mode on the DJ dev port (:5000?listen=true),
+    // use the listener backend (:5001). For proxied deployments (Cloudflare),
+    // do NOT force a port (it may be blocked); stick to same-origin.
+    const serverUrl = (isListenerMode && window.location.port === '5000')
         ? `${window.location.protocol}//${window.location.hostname}:5001`
         : window.location.origin;
     console.log(`🔌 Initializing Socket.IO connection to: ${serverUrl}`);
```
```diff
@@ -1599,10 +1664,20 @@ function initSocket() {
 
     socket.on('broadcast_started', () => {
         console.log('🎙️ Broadcast started notification received');
+        // Update relay UI if it's a relay
+        const relayStatus = document.getElementById('relay-status');
+        if (relayStatus && relayStatus.textContent.includes('Connecting')) {
+            relayStatus.textContent = 'Relay active - streaming to listeners';
+            relayStatus.style.color = '#00ff00';
+        }
     });
 
     socket.on('broadcast_stopped', () => {
         console.log('🛑 Broadcast stopped notification received');
+        // Reset relay UI if it was active
+        document.getElementById('start-relay-btn').style.display = 'inline-block';
+        document.getElementById('stop-relay-btn').style.display = 'none';
+        document.getElementById('relay-status').textContent = '';
     });
 
     socket.on('mixer_status', (data) => {
```
```diff
@@ -1617,6 +1692,10 @@ function initSocket() {
     socket.on('error', (data) => {
         console.error('📡 Server error:', data.message);
         alert(`SERVER ERROR: ${data.message}`);
+        // Reset relay UI on error
+        document.getElementById('start-relay-btn').style.display = 'inline-block';
+        document.getElementById('stop-relay-btn').style.display = 'none';
+        document.getElementById('relay-status').textContent = '';
     });
 
     return socket;
```
```diff
@@ -1782,8 +1861,31 @@ function startBroadcast() {
     const selectedBitrate = parseInt(qualitySelect.value) * 1000; // Convert kbps to bps
     console.log(`🎚️ Starting broadcast at ${qualitySelect.value}kbps`);
 
+    const preferredTypes = [
+        // Prefer MP4/AAC when available (broad device support)
+        'audio/mp4;codecs=mp4a.40.2',
+        'audio/mp4',
+        // Fallbacks
+        'audio/webm',
+        'audio/ogg',
+    ];
+    const chosenType = preferredTypes.find((t) => {
+        try {
+            return MediaRecorder.isTypeSupported(t);
+        } catch {
+            return false;
+        }
+    });
+
+    if (!chosenType) {
+        throw new Error('No supported MediaRecorder mimeType found on this browser');
+    }
+
+    currentStreamMimeType = chosenType;
+    console.log(`🎛️ Using broadcast mimeType: ${currentStreamMimeType}`);
+
     mediaRecorder = new MediaRecorder(stream, {
-        mimeType: 'audio/webm;codecs=opus',
+        mimeType: currentStreamMimeType,
         audioBitsPerSecond: selectedBitrate
     });
```
```diff
@@ -1906,7 +2008,7 @@ function startBroadcast() {
     document.getElementById('broadcast-status').textContent = '🔴 LIVE';
     document.getElementById('broadcast-status').classList.add('live');
 
-    // Notify server
+    // Notify server that broadcast is active (listeners use MP3 stream)
     if (!socket) initSocket();
     socket.emit('start_broadcast');
     socket.emit('get_listener_count');
```
```diff
@@ -2063,10 +2165,52 @@ function toggleAutoStream(enabled) {
     localStorage.setItem('autoStartStream', enabled);
 }
 
+// ========== REMOTE RELAY FUNCTIONS ==========
+
+function startRemoteRelay() {
+    const urlInput = document.getElementById('remote-stream-url');
+    const url = urlInput.value.trim();
+
+    if (!url) {
+        alert('Please enter a remote stream URL');
+        return;
+    }
+
+    if (!socket) initSocket();
+
+    // Stop any existing broadcast first
+    if (isBroadcasting) {
+        stopBroadcast();
+    }
+
+    console.log('🔗 Starting remote relay for:', url);
+
+    // Update UI
+    document.getElementById('start-relay-btn').style.display = 'none';
+    document.getElementById('stop-relay-btn').style.display = 'inline-block';
+    document.getElementById('relay-status').textContent = 'Connecting to remote stream...';
+    document.getElementById('relay-status').style.color = '#00f3ff';
+
+    socket.emit('start_remote_relay', { url: url });
+}
+
+function stopRemoteRelay() {
+    if (!socket) return;
+
+    console.log('🛑 Stopping remote relay');
+
+    socket.emit('stop_remote_relay');
+
+    // Update UI
+    document.getElementById('start-relay-btn').style.display = 'inline-block';
+    document.getElementById('stop-relay-btn').style.display = 'none';
+    document.getElementById('relay-status').textContent = '';
+}
+
 // ========== LISTENER MODE ==========
 
 function initListenerMode() {
-    console.log('🎧 Initializing listener mode (MediaSource Pipeline)...');
+    console.log('🎧 Initializing listener mode (MP3 stream)...');
 
     // UI Feedback for listener
     const appContainer = document.querySelector('.app-container');
```
```diff
@@ -2093,90 +2237,65 @@ function initListenerMode() {
     // AudioContext will be created when user enables audio to avoid suspension
 
-    // Create or reuse audio element to handle the MediaSource
+    // ALWAYS create a fresh audio element to avoid MediaSource/MediaElementSource conflicts
+    // This is critical for page refreshes - you can only create MediaElementSource once per element
     let audio;
 
+    // Clean up old audio element if it exists
     if (window.listenerAudio) {
-        // Reuse existing audio element from previous initialization
-        audio = window.listenerAudio;
-        console.log('♻️ Reusing existing audio element');
-
-        // Clean up old MediaSource if it exists
-        if (audio.src) {
-            URL.revokeObjectURL(audio.src);
-            audio.removeAttribute('src');
-            audio.load(); // Reset the element
-        }
-    } else {
-        // Create a new hidden audio element
-        audio = new Audio();
+        console.log('🧹 Cleaning up old audio element and AudioContext nodes');
+        try {
+            window.listenerAudio.pause();
+            if (window.listenerAudio.src) {
+                URL.revokeObjectURL(window.listenerAudio.src);
+            }
+            window.listenerAudio.removeAttribute('src');
+            window.listenerAudio.remove(); // Remove from DOM
+        } catch (e) {
+            console.warn('Error cleaning up old audio:', e);
+        }
+
+        // Reset all AudioContext-related nodes
+        if (listenerMediaElementSourceNode) {
+            try {
+                listenerMediaElementSourceNode.disconnect();
+            } catch (e) { }
+            listenerMediaElementSourceNode = null;
+        }
+        if (listenerAnalyserNode) {
+            try {
+                listenerAnalyserNode.disconnect();
+            } catch (e) { }
+            listenerAnalyserNode = null;
+        }
+        if (listenerGainNode) {
+            try {
+                listenerGainNode.disconnect();
+            } catch (e) { }
+            listenerGainNode = null;
+        }
+
+        window.listenerAudio = null;
+        window.listenerMediaSource = null;
+        window.listenerAudioEnabled = false;
+    }
+
+    // Create a new hidden media element.
+    // For MP3 we can use a plain <audio> element.
+    audio = document.createElement('audio');
+    audio.autoplay = false; // Don't autoplay - we use the Enable Audio button
+    audio.hidden = true;
+    audio.muted = false;
+    audio.controls = false;
+    audio.playsInline = true;
+    audio.setAttribute('playsinline', '');
+    audio.style.display = 'none';
+    document.body.appendChild(audio);
-        console.log('🆕 Created new audio element');
-
-        // AudioContext will be created later on user interaction
-    }
+    console.log('🆕 Created fresh media element (audio) for listener');
 
-    // Initialize MediaSource for streaming binary chunks
-    const mediaSource = new MediaSource();
-    audio.src = URL.createObjectURL(mediaSource);
-
-    // CRITICAL: Call load() to initialize the MediaSource
-    // Without this, the audio element won't load the MediaSource until play() is called,
-    // which will fail with "no supported sources" if no data is buffered yet
+    // MP3 stream (server-side) — requires ffmpeg on the server.
+    audio.src = getMp3FallbackUrl();
     audio.load();
-    console.log('🎬 Audio element loading MediaSource...');
-
-    let sourceBuffer = null;
-    let audioQueue = [];
-    let chunksReceived = 0;
-    let lastStatusUpdate = 0;
-
-    mediaSource.addEventListener('sourceopen', () => {
-        console.log('📦 MediaSource opened');
-        const mimeType = 'audio/webm;codecs=opus';
-
-        if (!MediaSource.isTypeSupported(mimeType)) {
-            console.error(`❌ Browser does not support ${mimeType}`);
-            const statusEl = document.getElementById('connection-status');
-            if (statusEl) statusEl.textContent = '❌ Error: Browser does not support WebM/Opus audio';
-            alert('Your browser does not support WebM/Opus audio format. Please try Chrome, Firefox, or Edge.');
-            return;
-        }
-
-        try {
-            sourceBuffer = mediaSource.addSourceBuffer(mimeType);
-            sourceBuffer.mode = 'sequence';
-
-            // Kick off first append if data is already in queue
-            if (audioQueue.length > 0 && !sourceBuffer.updating && mediaSource.readyState === 'open') {
-                sourceBuffer.appendBuffer(audioQueue.shift());
-            }
-
-            sourceBuffer.addEventListener('updateend', () => {
-                // Process next chunk in queue
-                if (audioQueue.length > 0 && !sourceBuffer.updating) {
-                    sourceBuffer.appendBuffer(audioQueue.shift());
-                }
-
-                // Periodic cleanup of old buffer data to prevent memory bloat
-                // Keep the last 60 seconds of audio data
-                if (audio.buffered.length > 0 && !sourceBuffer.updating && mediaSource.readyState === 'open') {
-                    const end = audio.buffered.end(audio.buffered.length - 1);
-                    const start = audio.buffered.start(0);
-                    if (end - start > 120) { // If buffer is > 2 mins
-                        try {
-                            sourceBuffer.remove(0, end - 60);
-                        } catch (e) {
-                            console.warn('Buffer cleanup skipped:', e.message);
-                        }
-                    }
-                }
-            });
-        } catch (e) {
-            console.error('❌ Failed to add SourceBuffer:', e);
-        }
-    });
+    console.log(`🎧 Listener source set to MP3 stream: ${audio.src}`);
 
     // Show enable audio button instead of attempting autoplay
     const enableAudioBtn = document.getElementById('enable-audio-btn');
```
```diff
@@ -2186,67 +2305,19 @@ function initListenerMode() {
         enableAudioBtn.style.display = 'flex';
     }
     if (statusEl) {
-        statusEl.textContent = '🔵 Click "Enable Audio" to start listening';
+        statusEl.textContent = '🔵 Click "Enable Audio" to start listening (MP3)';
     }
 
     // Store audio element and context for later activation
     window.listenerAudio = audio;
-    window.listenerMediaSource = mediaSource;
+    window.listenerMediaSource = null;
     window.listenerAudioEnabled = false; // Track if user has enabled audio
 
     // Initialize socket and join
     initSocket();
     socket.emit('join_listener');
 
-    let hasHeader = false;
-
-    socket.on('audio_data', (data) => {
-        // We MUST have the header before we can do anything with broadcast chunks
-        const isHeaderDirect = data instanceof ArrayBuffer && data.byteLength > 1000; // Heuristic
-
-        hasHeader = true; // No header request needed for WebM relay
-
-        chunksReceived++;
-        listenerChunksReceived = chunksReceived;
-        audioQueue.push(data);
-
-        // JITTER BUFFER: Reduced to 1 segments (buffered) for WebM/Opus
-        const isHeader = false;
-
-        if (sourceBuffer && !sourceBuffer.updating && mediaSource.readyState === 'open') {
-            if (audioQueue.length >= 1) {
-                try {
-                    const next = audioQueue.shift();
-                    sourceBuffer.appendBuffer(next);
-
-                    // Reset error counter on success
-                    if (window.sourceBufferErrorCount) window.sourceBufferErrorCount = 0;
-                } catch (e) {
-                    console.error('Buffer append error:', e);
-                    window.sourceBufferErrorCount = (window.sourceBufferErrorCount || 0) + 1;
-
-                    if (window.sourceBufferErrorCount >= 5) {
-                        console.error('❌ Too many SourceBuffer errors - attempting recovery...');
-                        const statusEl = document.getElementById('connection-status');
-                        if (statusEl) statusEl.textContent = '⚠️ Stream error - reconnecting...';
-                        audioQueue = [];
-                        chunksReceived = 0;
-                        window.sourceBufferErrorCount = 0;
-                    }
-                }
-            }
-        }
-
-        // UI Update (only if audio is already enabled, don't overwrite the enable prompt)
-        const now = Date.now();
-        if (now - lastStatusUpdate > 1000 && window.listenerAudioEnabled) {
-            const statusEl = document.getElementById('connection-status');
-            if (statusEl) {
-                statusEl.textContent = `🟢 Connected - ${chunksReceived} chunks (${audioQueue.length} buffered)`;
-            }
-            lastStatusUpdate = now;
-        }
-    });
+    // No socket audio chunks needed in MP3-only mode.
 
     socket.on('broadcast_started', () => {
         const nowPlayingEl = document.getElementById('listener-now-playing');
```
```diff
@@ -2257,15 +2328,18 @@ function initListenerMode() {
     socket.on('stream_status', (data) => {
         const nowPlayingEl = document.getElementById('listener-now-playing');
         if (nowPlayingEl) {
-            nowPlayingEl.textContent = data.active ? '🎵 Stream is live!' : 'Stream offline - waiting for DJ...';
+            if (data.active) {
+                const status = data.remote_relay ? '🔗 Remote stream is live!' : '🎵 DJ stream is live!';
+                nowPlayingEl.textContent = status;
+            } else {
+                nowPlayingEl.textContent = 'Stream offline - waiting for DJ...';
+            }
         }
     });
 
     socket.on('broadcast_stopped', () => {
         const nowPlayingEl = document.getElementById('listener-now-playing');
         if (nowPlayingEl) nowPlayingEl.textContent = 'Stream ended';
-        chunksReceived = 0;
-        audioQueue = [];
     });
 
     socket.on('connect', () => {
```
```diff
@@ -2306,17 +2380,36 @@ async function enableListenerAudio() {
     }
 
     // 3. Bridge Audio Element to AudioContext if not already connected
-    if (window.listenerAudio && !window.listenerAudio._connectedToContext) {
+    if (window.listenerAudio) {
         try {
-            const sourceNode = listenerAudioContext.createMediaElementSource(window.listenerAudio);
             if (!listenerGainNode) {
                 listenerGainNode = listenerAudioContext.createGain();
                 listenerGainNode.gain.value = 0.8;
                 listenerGainNode.connect(listenerAudioContext.destination);
             }
-            sourceNode.connect(listenerGainNode);
+
+            if (!listenerAnalyserNode) {
+                listenerAnalyserNode = listenerAudioContext.createAnalyser();
+                listenerAnalyserNode.fftSize = 256;
+            }
+
+            if (!listenerMediaElementSourceNode) {
+                listenerMediaElementSourceNode = listenerAudioContext.createMediaElementSource(window.listenerAudio);
+            }
+
+            // Ensure a clean, single connection chain:
+            // media element -> analyser -> gain -> destination
+            try { listenerMediaElementSourceNode.disconnect(); } catch (_) { }
+            try { listenerAnalyserNode.disconnect(); } catch (_) { }
+
+            listenerMediaElementSourceNode.connect(listenerAnalyserNode);
+            listenerAnalyserNode.connect(listenerGainNode);
 
             window.listenerAudio._connectedToContext = true;
-            console.log('🔗 Connected audio element to AudioContext');
+            console.log('🔗 Connected audio element to AudioContext (with analyser)');
+
+            // Start visualizer after the graph exists
+            startListenerVUMeter();
         } catch (e) {
             console.warn('⚠️ Could not connect to AudioContext:', e.message);
         }
```
```diff
@@ -2341,37 +2434,19 @@ async function enableListenerAudio() {
     const volValue = volEl ? parseInt(volEl.value, 10) : 80;
     setListenerVolume(Number.isFinite(volValue) ? volValue : 80);

     // Check if we have buffered data
     const hasBufferedData = () => {
         return window.listenerAudio.buffered && window.listenerAudio.buffered.length > 0;
     };

-    // Attempt playback IMMEDIATELY to capture user gesture
-    // We do this before waiting for data so we don't lose the "user interaction" token
+    // MP3 stream: call play() immediately to capture the user gesture.
     if (audioText) audioText.textContent = 'STARTING...';
     console.log('▶️ Attempting to play audio...');
     const playPromise = window.listenerAudio.play();

-    // If no buffered data yet, show status but don't block playback
-    if (!hasBufferedData()) {
-        console.log('⏳ Waiting for audio data to buffer...');
-        const chunkCount = Number.isFinite(listenerChunksReceived) ? listenerChunksReceived : 0;
-        if (audioText) {
-            audioText.textContent = chunkCount > 0 ? 'BUFFERING...' : 'WAITING FOR STREAM...';
-        }
-
-        // Start a background checker to update UI
-        const checkInterval = setInterval(() => {
-            if (hasBufferedData()) {
-                clearInterval(checkInterval);
-                console.log('✅ Audio data buffered');
-            } else if (audioText && chunkCount > 0 && audioText.textContent === 'WAITING FOR STREAM...') {
-                audioText.textContent = 'BUFFERING...';
-            }
-        }, 500);
-    } else {
-        console.log('✅ Audio already has buffered data');
-    }
+    // If not buffered yet, show buffering but don't block.
+    if (!hasBufferedData() && audioText) {
+        audioText.textContent = 'BUFFERING...';
+    }

     await playPromise;
     console.log('✅ Audio playback started successfully');
```
```diff
@@ -2406,22 +2481,13 @@ async function enableListenerAudio() {
         if (error.name === 'NotAllowedError') {
             errorMsg = 'Browser blocked audio (NotAllowedError). Check permissions.';
         } else if (error.name === 'NotSupportedError') {
-            errorMsg = 'Format not supported or buffer empty (NotSupportedError).';
+            errorMsg = 'MP3 stream not supported or unavailable (NotSupportedError).';
         }

         stashedStatus.textContent = '⚠️ ' + errorMsg;

-        // If it was a NotSupportedError (likely empty buffer), we can try to recover automatically
-        // by waiting for data and trying to play again (even if it might fail without gesture)
         if (error.name === 'NotSupportedError') {
-            console.log('🔄 Retrying playback in background once data arrives...');
-            const retryInterval = setInterval(() => {
-                if (window.listenerAudio.buffered && window.listenerAudio.buffered.length > 0) {
-                    clearInterval(retryInterval);
-                    window.listenerAudio.play().catch(e => console.error('Background retry failed:', e));
-                    stashedStatus.textContent = '🟢 Recovered - Playing';
-                }
-            }, 1000);
+            stashedStatus.textContent = '⚠️ MP3 stream failed. Is ffmpeg installed on the server?';
         }
     }
 }
```

**server.py** (371 changed lines)
```diff
@@ -3,19 +3,187 @@ import eventlet
 eventlet.monkey_patch()

 import os
-from flask import Flask, send_from_directory, jsonify, request, session
+import json
+import subprocess
+import threading
+import queue
+import time
+from flask import Flask, send_from_directory, jsonify, request, session, Response, stream_with_context
 from flask_socketio import SocketIO, emit
 from dotenv import load_dotenv

 # Load environment variables from .env file
 load_dotenv()

 import downloader


+def _load_config():
+    """Loads optional config.json from the project root.
+
+    If missing or invalid, returns an empty dict.
+    """
+    try:
+        with open('config.json', 'r', encoding='utf-8') as f:
+            data = json.load(f)
+        return data if isinstance(data, dict) else {}
+    except FileNotFoundError:
+        return {}
+    except Exception:
+        return {}
+
+
+CONFIG = _load_config()
+DJ_PANEL_PASSWORD = (CONFIG.get('dj_panel_password') or '').strip()
+DJ_AUTH_ENABLED = bool(DJ_PANEL_PASSWORD)
+
 # Relay State
 broadcast_state = {
-    'active': False
+    'active': False,
 }
 listener_sids = set()
 dj_sids = set()

+# === Optional MP3 fallback stream (server-side transcoding) ===
+# This allows listeners on browsers that don't support WebM/Opus via MediaSource
+# (notably some Safari / locked-down environments) to still hear the stream.
+_ffmpeg_proc = None
+_ffmpeg_in_q = queue.Queue(maxsize=200)
+_mp3_clients = set()  # set[queue.Queue]
+_mp3_lock = threading.Lock()
+_transcode_threads_started = False
+_transcoder_bytes_out = 0
+_transcoder_last_error = None
+_last_audio_chunk_ts = 0.0
+_remote_stream_url = None  # For relaying remote streams
+
+
+def _start_transcoder_if_needed():
+    global _ffmpeg_proc, _transcode_threads_started
+
+    if _ffmpeg_proc is not None and _ffmpeg_proc.poll() is None:
+        return
+
+    if _remote_stream_url:
+        # Remote relay mode: input from URL
+        cmd = [
+            'ffmpeg',
+            '-hide_banner',
+            '-loglevel', 'error',
+            '-i', _remote_stream_url,
+            '-vn',
+            '-acodec', 'libmp3lame',
+            '-b:a', '192k',
+            '-f', 'mp3',
+            'pipe:1',
+        ]
+    else:
+        # Local broadcast mode: input from pipe
+        cmd = [
+            'ffmpeg',
+            '-hide_banner',
+            '-loglevel', 'error',
+            '-i', 'pipe:0',
+            '-vn',
+            '-acodec', 'libmp3lame',
+            '-b:a', '192k',
+            '-f', 'mp3',
+            'pipe:1',
+        ]
+
+    try:
+        if _remote_stream_url:
+            _ffmpeg_proc = subprocess.Popen(
+                cmd,
+                stdout=subprocess.PIPE,
+                stderr=subprocess.PIPE,
+                bufsize=0,
+            )
+        else:
+            _ffmpeg_proc = subprocess.Popen(
+                cmd,
+                stdin=subprocess.PIPE,
+                stdout=subprocess.PIPE,
+                stderr=subprocess.PIPE,
+                bufsize=0,
+            )
+    except FileNotFoundError:
+        _ffmpeg_proc = None
+        print('⚠️ ffmpeg not found; /stream.mp3 fallback disabled')
+        return
+
+    print(f'🎛️ ffmpeg transcoder started for /stream.mp3 ({ "remote relay" if _remote_stream_url else "local broadcast" })')
+
+    def _writer():
+        global _transcoder_last_error
+        while True:
+            chunk = _ffmpeg_in_q.get()
+            if chunk is None:
+                break
+            proc = _ffmpeg_proc
+            if proc is None or proc.stdin is None:
+                continue
+            try:
+                proc.stdin.write(chunk)
+            except Exception:
+                # If ffmpeg dies or pipe breaks, just stop writing.
+                _transcoder_last_error = 'stdin write failed'
+                break
+
+    def _reader():
+        global _transcoder_bytes_out, _transcoder_last_error
+        proc = _ffmpeg_proc
+        if proc is None or proc.stdout is None:
+            return
+        while True:
+            try:
+                data = proc.stdout.read(4096)
+            except Exception:
+                _transcoder_last_error = 'stdout read failed'
+                break
+            if not data:
+                break
+            _transcoder_bytes_out += len(data)
+            with _mp3_lock:
+                clients = list(_mp3_clients)
+            for q in clients:
+                try:
+                    q.put_nowait(data)
+                except Exception:
+                    # Drop if client queue is full or gone.
+                    pass
+
+    if not _transcode_threads_started:
+        threading.Thread(target=_writer, daemon=True).start()
+        threading.Thread(target=_reader, daemon=True).start()
+        _transcode_threads_started = True
+
+
+def _stop_transcoder():
+    global _ffmpeg_proc
+    try:
+        _ffmpeg_in_q.put_nowait(None)
+    except Exception:
+        pass
+
+    proc = _ffmpeg_proc
+    _ffmpeg_proc = None
+    if proc is None:
+        return
+    try:
+        proc.terminate()
+    except Exception:
+        pass
+
+
+def _feed_transcoder(data: bytes):
+    global _last_audio_chunk_ts
+    if _ffmpeg_proc is None or _ffmpeg_proc.poll() is not None or _remote_stream_url:
+        return
+    _last_audio_chunk_ts = time.time()
+    try:
+        _ffmpeg_in_q.put_nowait(data)
+    except Exception:
+        # Queue full; drop to keep latency bounded.
+        pass
+
 MUSIC_FOLDER = "music"
 # Ensure music folder exists
 if not os.path.exists(MUSIC_FOLDER):
```
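The `_reader` thread above fans one ffmpeg stdout out to many per-listener queues, dropping chunks for any client whose bounded queue is full so a slow listener never stalls the producer. That fan-out policy can be sketched in isolation; the names below are illustrative, not from the repo:

```python
import queue
import threading

# Bounded per-client queues: a slow client drops chunks instead of
# stalling the producer (same policy as put_nowait + pass in the diff).
clients = set()
clients_lock = threading.Lock()

def register_client(maxsize=200):
    q = queue.Queue(maxsize=maxsize)
    with clients_lock:
        clients.add(q)
    return q

def unregister_client(q):
    with clients_lock:
        clients.discard(q)

def broadcast(chunk: bytes):
    # Snapshot the client set under the lock, deliver outside it.
    with clients_lock:
        targets = list(clients)
    for q in targets:
        try:
            q.put_nowait(chunk)
        except queue.Full:
            pass  # drop for this client; latency stays bounded
```

Dropping instead of blocking trades occasional audio gaps for a hard upper bound on per-listener latency, which is the right trade for a live stream.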
```diff
@@ -138,10 +306,154 @@ def setup_shared_routes(app):
             print(f"❌ Upload error: {e}")
             return jsonify({"success": False, "error": str(e)}), 500

+    @app.route('/stream.mp3')
+    def stream_mp3():
+        # Streaming response from the ffmpeg transcoder output.
+        # If ffmpeg isn't available, return 503.
+        if _ffmpeg_proc is None or _ffmpeg_proc.poll() is not None:
+            return jsonify({"success": False, "error": "MP3 stream not available"}), 503
+
+        client_q: queue.Queue = queue.Queue(maxsize=200)
+        with _mp3_lock:
+            _mp3_clients.add(client_q)
+
+        def gen():
+            try:
+                while True:
+                    chunk = client_q.get()
+                    if chunk is None:
+                        break
+                    yield chunk
+            finally:
+                with _mp3_lock:
+                    _mp3_clients.discard(client_q)
+
+        return Response(
+            stream_with_context(gen()),
+            mimetype='audio/mpeg',
+            headers={
+                'Cache-Control': 'no-store, no-cache, must-revalidate, max-age=0',
+                'Connection': 'keep-alive',
+            },
+        )
+
+    @app.route('/stream_debug')
+    def stream_debug():
+        proc = _ffmpeg_proc
+        running = proc is not None and proc.poll() is None
+        return jsonify({
+            'broadcast_active': broadcast_state.get('active', False),
+            'broadcast_mimeType': broadcast_state.get('mimeType'),
+            'ffmpeg_running': running,
+            'ffmpeg_found': (proc is not None),
+            'mp3_clients': len(_mp3_clients),
+            'transcoder_bytes_out': _transcoder_bytes_out,
+            'transcoder_last_error': _transcoder_last_error,
+            'last_audio_chunk_ts': _last_audio_chunk_ts,
+        })
```
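The `/stream.mp3` handler above pairs a per-client queue with a generator that unregisters itself in a `finally` block, so a listener that disconnects mid-stream is always removed from the fan-out set. A minimal, framework-free sketch of that pattern (illustrative names, not from the repo):

```python
import queue

def make_stream(client_q: queue.Queue, registry: set):
    """Yield chunks until a None sentinel; always unregister on exit."""
    registry.add(client_q)

    def gen():
        try:
            while True:
                chunk = client_q.get()
                if chunk is None:  # sentinel marks end of stream
                    break
                yield chunk
        finally:
            # Runs on normal exhaustion AND on generator close()
            # (which Flask triggers when the HTTP client disconnects).
            registry.discard(client_q)

    return gen()
```

The `finally` clause is the important part: WSGI servers call `close()` on the response iterator when the connection drops, which raises `GeneratorExit` inside the generator and still executes the cleanup.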
```diff
 # === DJ SERVER (Port 5000) ===
 dj_app = Flask(__name__, static_folder='.', static_url_path='')
 dj_app.config['SECRET_KEY'] = 'dj_panel_secret'
 setup_shared_routes(dj_app)

+
+@dj_app.before_request
+def _protect_dj_panel():
+    """Optionally require a password for the DJ panel only (port 5000).
+
+    This does not affect the listener server (port 5001).
+    """
+    if not DJ_AUTH_ENABLED:
+        return None
+
+    # Allow login/logout endpoints
+    if request.path in ('/login', '/logout'):
+        return None
+
+    # If already authenticated, allow
+    if session.get('dj_authed') is True:
+        return None
+
+    # Redirect everything else to login
+    return (
+        "<!doctype html><html><head><meta http-equiv='refresh' content='0; url=/login' /></head>"
+        "<body>Redirecting to <a href='/login'>/login</a>...</body></html>",
+        302,
+        {'Location': '/login'}
+    )
+
+
+@dj_app.route('/login', methods=['GET', 'POST'])
+def dj_login():
+    if not DJ_AUTH_ENABLED:
+        # If auth is disabled, just go to the panel.
+        session['dj_authed'] = True
+        return (
+            "<!doctype html><html><head><meta http-equiv='refresh' content='0; url=/' /></head>"
+            "<body>Auth disabled. Redirecting...</body></html>",
+            302,
+            {'Location': '/'}
+        )
+
+    error = None
+    if request.method == 'POST':
+        pw = (request.form.get('password') or '').strip()
+        if pw == DJ_PANEL_PASSWORD:
+            session['dj_authed'] = True
+            return (
+                "<!doctype html><html><head><meta http-equiv='refresh' content='0; url=/' /></head>"
+                "<body>Logged in. Redirecting...</body></html>",
+                302,
+                {'Location': '/'}
+            )
+        error = 'Invalid password'
+
+    # Minimal inline login page (no new assets)
+    return f"""<!doctype html>
+<html lang=\"en\">
+<head>
+  <meta charset=\"utf-8\" />
+  <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\" />
+  <title>TechDJ - DJ Login</title>
+  <style>
+    body {{ background:#0a0a12; color:#eee; font-family: system-ui, -apple-system, Segoe UI, Roboto, Arial; margin:0; }}
+    .wrap {{ min-height:100vh; display:flex; align-items:center; justify-content:center; padding:24px; }}
+    .card {{ width:100%; max-width:420px; background:rgba(10,10,20,0.85); border:2px solid #bc13fe; border-radius:16px; padding:24px; box-shadow:0 0 40px rgba(188,19,254,0.25); }}
+    h1 {{ margin:0 0 16px 0; font-size:22px; }}
+    label {{ display:block; margin:12px 0 8px; opacity:0.9; }}
+    input {{ width:100%; padding:12px; border-radius:10px; border:1px solid rgba(255,255,255,0.15); background:rgba(0,0,0,0.35); color:#fff; }}
+    button {{ width:100%; margin-top:14px; padding:12px; border-radius:10px; border:2px solid #bc13fe; background:rgba(188,19,254,0.15); color:#fff; font-weight:700; cursor:pointer; }}
+    .err {{ margin-top:12px; color:#ffb3ff; }}
+    .hint {{ margin-top:10px; font-size:12px; opacity:0.7; }}
+  </style>
+</head>
+<body>
+  <div class=\"wrap\">
+    <div class=\"card\">
+      <h1>DJ Panel Locked</h1>
+      <form method=\"post\" action=\"/login\">
+        <label for=\"password\">Password</label>
+        <input id=\"password\" name=\"password\" type=\"password\" autocomplete=\"current-password\" autofocus />
+        <button type=\"submit\">Unlock DJ Panel</button>
+        {f"<div class='err'>{error}</div>" if error else ""}
+        <div class=\"hint\">Set/disable this in config.json (dj_panel_password).</div>
+      </form>
+    </div>
+  </div>
+</body>
+</html>"""
+
+
+@dj_app.route('/logout')
+def dj_logout():
+    session.pop('dj_authed', None)
+    return (
+        "<!doctype html><html><head><meta http-equiv='refresh' content='0; url=/login' /></head>"
+        "<body>Logged out. Redirecting...</body></html>",
+        302,
+        {'Location': '/login'}
+    )
+
 dj_socketio = SocketIO(
     dj_app,
     cors_allowed_origins="*",
```
```diff
@@ -155,6 +467,9 @@ dj_socketio = SocketIO(

 @dj_socketio.on('connect')
 def dj_connect():
+    if DJ_AUTH_ENABLED and session.get('dj_authed') is not True:
+        print(f"⛔ DJ socket rejected (unauthorized): {request.sid}")
+        return False
     print(f"🎧 DJ connected: {request.sid}")
     dj_sids.add(request.sid)
```
```diff
@@ -168,11 +483,13 @@ def stop_broadcast_after_timeout():
         pass

 @dj_socketio.on('start_broadcast')
-def dj_start():
+def dj_start(data=None):
     broadcast_state['active'] = True
     session['is_dj'] = True
     print("🎙️ Broadcast -> ACTIVE")

+    _start_transcoder_if_needed()
+
     listener_socketio.emit('broadcast_started', namespace='/')
     listener_socketio.emit('stream_status', {'active': True}, namespace='/')
```
```diff
@@ -182,14 +499,58 @@ def dj_stop():
     session['is_dj'] = False
     print("🛑 DJ stopped broadcasting")

+    _stop_transcoder()
+
     listener_socketio.emit('broadcast_stopped', namespace='/')
     listener_socketio.emit('stream_status', {'active': False}, namespace='/')

+@dj_socketio.on('start_remote_relay')
+def dj_start_remote_relay(data):
+    global _remote_stream_url
+    url = data.get('url', '').strip()
+    if not url:
+        dj_socketio.emit('error', {'message': 'No URL provided for remote relay'})
+        return
+
+    # Stop any existing broadcast/relay
+    if broadcast_state['active']:
+        dj_stop()
+
+    _remote_stream_url = url
+    broadcast_state['active'] = True
+    broadcast_state['remote_relay'] = True
+    session['is_dj'] = True
+    print(f"🔗 Starting remote relay from: {url}")
+
+    _start_transcoder_if_needed()
+
+    listener_socketio.emit('broadcast_started', namespace='/')
+    listener_socketio.emit('stream_status', {'active': True, 'remote_relay': True}, namespace='/')
+
+@dj_socketio.on('stop_remote_relay')
+def dj_stop_remote_relay():
+    global _remote_stream_url
+    _remote_stream_url = None
+    broadcast_state['active'] = False
+    broadcast_state['remote_relay'] = False
+    session['is_dj'] = False
+    print("🛑 Remote relay stopped")
+
+    _stop_transcoder()
+
+    listener_socketio.emit('broadcast_stopped', namespace='/')
+    listener_socketio.emit('stream_status', {'active': False}, namespace='/')
+
 @dj_socketio.on('audio_chunk')
 def dj_audio(data):
-    # Relay audio chunk to all listeners immediately
+    # MP3-only mode: do not relay raw chunks to listeners; feed transcoder only.
     if broadcast_state['active']:
-        listener_socketio.emit('audio_data', data, namespace='/')
+        # Ensure MP3 fallback transcoder is running (if ffmpeg is installed)
+        if _ffmpeg_proc is None or _ffmpeg_proc.poll() is not None:
+            _start_transcoder_if_needed()
+
+        if isinstance(data, (bytes, bytearray)):
+            _feed_transcoder(bytes(data))

 # === LISTENER SERVER (Port 5001) ===
 listener_app = Flask(__name__, static_folder='.', static_url_path='')
```
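The two ffmpeg invocations in `_start_transcoder_if_needed` differ only in the input source (`pipe:0` for local broadcast vs. a URL for remote relay); factoring the argv construction into a pure helper makes it unit-testable without spawning a process. A sketch, with a hypothetical helper name not in the repo:

```python
def build_ffmpeg_cmd(remote_url=None, bitrate='192k'):
    """Build the ffmpeg argv: read from a URL (remote relay) or from
    stdin (local broadcast), and encode to MP3 on stdout."""
    source = remote_url if remote_url else 'pipe:0'
    return [
        'ffmpeg',
        '-hide_banner',
        '-loglevel', 'error',
        '-i', source,
        '-vn',                    # drop any video stream
        '-acodec', 'libmp3lame',  # encode audio to MP3
        '-b:a', bitrate,
        '-f', 'mp3',
        'pipe:1',                 # raw MP3 to stdout
    ]
```

With this shape, only the `Popen` call needs to branch on whether `stdin=subprocess.PIPE` is required.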
**style.css** (83 changed lines)

```diff
@@ -1380,7 +1380,8 @@ input[type=range] {
 }

 #viz-A,
-#viz-B {
+#viz-B,
+#viz-listener {
     height: 80px !important;
 }

@@ -2319,6 +2320,75 @@ input[type=range] {
     opacity: 0.7;
 }

+/* Remote Relay Section */
+.remote-relay-section {
+    padding: 15px;
+    background: rgba(0, 0, 0, 0.3);
+    border-radius: 8px;
+    border: 1px solid rgba(0, 243, 255, 0.3);
+}
+
+.remote-relay-section h4 {
+    margin: 0 0 15px 0;
+    color: var(--primary-cyan);
+    font-family: 'Orbitron', sans-serif;
+    font-size: 1rem;
+}
+
+.relay-controls {
+    display: flex;
+    flex-direction: column;
+    gap: 10px;
+}
+
+.relay-url-input {
+    padding: 10px;
+    background: rgba(0, 0, 0, 0.5);
+    border: 1px solid var(--primary-cyan);
+    color: var(--text-main);
+    border-radius: 5px;
+    font-family: 'Rajdhani', monospace;
+    font-size: 0.85rem;
+}
+
+.relay-btn {
+    padding: 12px;
+    background: linear-gradient(145deg, #1a1a1a, #0a0a0a);
+    border: 2px solid var(--primary-cyan);
+    color: var(--primary-cyan);
+    font-family: 'Orbitron', sans-serif;
+    font-size: 0.9rem;
+    font-weight: bold;
+    cursor: pointer;
+    border-radius: 5px;
+    transition: all 0.3s;
+    box-shadow: 0 0 15px rgba(0, 243, 255, 0.2);
+}
+
+.relay-btn:hover {
+    background: linear-gradient(145deg, #2a2a2a, #1a1a1a);
+    box-shadow: 0 0 25px rgba(0, 243, 255, 0.4);
+    transform: translateY(-1px);
+}
+
+.relay-btn.stop {
+    border-color: #ff4444;
+    color: #ff4444;
+    box-shadow: 0 0 15px rgba(255, 68, 68, 0.2);
+}
+
+.relay-btn.stop:hover {
+    background: rgba(255, 68, 68, 0.1);
+    box-shadow: 0 0 25px rgba(255, 68, 68, 0.4);
+}
+
+.relay-status {
+    margin-top: 10px;
+    font-size: 0.85rem;
+    color: var(--text-dim);
+    min-height: 20px;
+}
+
 /* ========== LISTENER MODE ========== */

 .listener-mode {

@@ -2413,6 +2483,12 @@ input[type=range] {
     backdrop-filter: blur(10px);
 }

+#viz-listener {
+    width: 100%;
+    display: block;
+    margin: 20px 0;
+}
+
 .now-playing {
     text-align: center;
     font-family: 'Orbitron', sans-serif;

@@ -2559,6 +2635,11 @@ input[type=range] {
     .volume-control label {
         font-size: 0.9rem;
     }

+    #viz-listener {
+        height: 60px !important;
+        margin: 15px 0;
+    }
 }

 /* Hide landscape prompt globally when listening-active class is present */
```