
crystal

public, 6 stars, 0 forks, 0 issues

Commits

List of commits on branch master.
  • 6260176deafdef5a608af2c4e246fa4b143ce324: Install Bonjour headers in Linux build. (randymarsh77, 10 hours ago)
  • 6744b47382cd38223f3a669aef157db28ba3498e: Provide a Scope to stop recording. (randymarsh77, 10 hours ago)
  • 70cd4dea6803794d1a88745ee37da8d93eb71a59: Fix a number of regressions. (randymarsh77, 2 days ago)
  • 27f2280a058e367b274e425647a782beabcaf0da: Make players Sendable. (randymarsh77, 12 days ago)
  • 8b4ee6975c2d67346713328ad5e784cbefa070f8: Update to Swift 6. (randymarsh77, 16 days ago)
  • ff8af1f1cdf2e8f0df8c46b99c87465ed852093a: Add IAudioSource with microphone and file sources. (randymarsh77, a year ago)

README


Crystal

Low latency, synchronized, live audio streaming in Swift. Use with Soundflower or any other audio input source.


Why

Think Sonos, but without the expensive equipment or restricted audio sources. More like Airfoil combined with the other software from Rogue Amoeba.

Status

It works. See the example for how to consume this library in code. Or, for a deployable home audio solution, check out Amethyst.

Approaching lower latency. Currently, system latency is set to 400 ms, but it can run at 50 ms if the network and target devices support it. Total end-to-end latency is closer to 1 s. System latency is measured from the record input callback to the speaker; end-to-end latency is measured from the time output is fed to Soundflower until it reaches the speaker. The next step on this front is adaptive latency, optimized using aggregated ping results. After that, internalizing a Soundflower driver to cut out some of the redundancy.
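
As a rough sketch of how adaptive latency might work (the names and thresholds below are hypothetical, not part of the current API), the playback buffer target could be derived from aggregated ping results, with the 50 ms floor and 400 ms ceiling mirroring the static settings described above:

```swift
import Foundation

// Hypothetical helper: derive a playback buffer target (in ms) from recent ping samples.
// Nothing here exists in the library yet; names and constants are assumptions.
func adaptiveLatencyTarget(pingMillis samples: [Double]) -> Double {
    guard let worst = samples.max() else { return 400.0 } // no data: fall back to the default
    let mean = samples.reduce(0, +) / Double(samples.count)
    // Leave headroom above the worst observed round trip, then clamp to [50, 400] ms.
    let headroom = (worst - mean) + 20.0
    return min(max(worst + headroom, 50.0), 400.0)
}

// Example: a quiet LAN lands well below the 400 ms default.
print(adaptiveLatencyTarget(pingMillis: [4.2, 5.1, 3.8, 12.0])) // 50.0 (clamped up from ~38 ms)
```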

Getting Started

  • swift package init --type executable
  • swift build
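
If you are consuming Crystal from your own package, add it as a dependency in Package.swift. The sketch below is only a guess at the coordinates: the repository URL, version, and product name are assumptions, so match them to the actual release you are targeting.

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyAudioApp",
    dependencies: [
        // Assumed repository URL and version; point these at the real release.
        .package(url: "https://github.com/randymarsh77/crystal", from: "1.0.0"),
    ],
    targets: [
        .executableTarget(
            name: "MyAudioApp",
            dependencies: [
                // Assumed product name; check this repository's Package.swift.
                .product(name: "Crystal", package: "crystal"),
            ]
        ),
    ]
)
```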

Commandline targets

  • Write code.
  • swift build

Xcode applications

  • swift package generate-xcodeproj
  • Link CAsync to CoreFoundation
  • Add a 'Copy Files' build phase, set the destination to 'Frameworks', and add all the dependent products' frameworks.
  • Write code.

Future / Roadmap

Some of the following are at least partially beyond the scope of Crystal, but are encompassed by Amethyst.

  • Add "networking": network discovery, multipeer/Bluetooth.
  • Opt-in authenticated and encrypted streams with once tokens.
  • Multiple simultaneous (on-demand) format outputs.
  • Linux support for rPi.
  • Auto-configuring surround sound with OpenAL and iBeacons.
  • Video streams, synchronized with audio playback.

On Latency

I originally implemented file segmentation similar to HLS (because it worked "out of the box" on an rPi), and discovered firsthand what a poor solution this is for low latency live streaming. Immediately, latency suffers from the length of the file segment. Currently, a stream's latency pipeline is [any delay in Soundflower] -> [any delay in AudioQueue recording] -> [network latency] -> [buffering]. Given the excellent (and growing) support for HLS, it is still a more scalable solution if you can afford the latency hit.
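
To make the segment-length penalty concrete, here is a back-of-the-envelope sketch. The 2 s segment duration and three-segment buffer are illustrative assumptions roughly in line with typical HLS player behavior, not measurements from this project:

```swift
// Minimum latency imposed by file segmentation, ignoring the network entirely:
// the player cannot start until it has buffered a few complete segments.
let segmentDuration = 2.0    // seconds of audio per file segment (assumed)
let bufferedSegments = 3.0   // segments a typical player buffers before starting (assumed)
let latencyFloor = segmentDuration * bufferedSegments
print("Segmentation alone imposes at least \(latencyFloor) seconds of latency") // 6.0
```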

On Networking

The goal with "networking" is to enable clients to always be on and servers to go on and offline. Clients need to discover servers and visa versa. Media source(s), output(s), routing, controls, metadata should all be configurable as separate network nodes. Configuration, authentication, metadata should happen over HTTP. Data transfer should happen over authenticated sockets. For example, one computer has a music library and is exposing itself as a source. A phone sees some options to start and control playback. The rPi detects that it should receive and play audio. The phone changes the volume, skips a track, changes the configuration of which clients are recieving audio, etc. Joe Hacker can see one of these networks when peeping the WiFi/public api, but he can't control playback, and he can't initiate a client socket connection that results in payload data transfer.
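
Since the Linux build already pulls in Bonjour headers, discovery will presumably be Bonjour-based. As a sketch only (the "_crystal._tcp" service type is an assumption, not the project's actual registration), a client on Apple platforms could browse for servers with the Network framework:

```swift
import Foundation
import Network

// Browse for hypothetical Crystal servers advertised over Bonjour.
// "_crystal._tcp" is an assumed service type; the real one may differ.
let browser = NWBrowser(for: .bonjour(type: "_crystal._tcp", domain: nil), using: .tcp)

browser.browseResultsChangedHandler = { results, _ in
    for result in results {
        // Each result is a discovered server; configuration and authentication
        // would then happen over HTTP before any payload data flows.
        print("Discovered server: \(result.endpoint)")
    }
}

browser.stateUpdateHandler = { state in
    print("Browser state: \(state)")
}

browser.start(queue: .main)
RunLoop.main.run() // keep a command-line tool alive while browsing
```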