Shared Network [Community Bounty]

Status: Under Review


To connect to the Nimiq Network, both the Safe and the Hub include an iframe in their pages. This iframe starts a Nimiq browser node (client) that connects to the network. The Safe and Hub then communicate with this iframe client via RPC to update the user’s balances and to listen for and send transactions. Both the Safe and the Hub, and any other apps that use the Network iframe, create their own instance of the iframe and thus of the Nimiq client within it.
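As an illustration of this setup (the iframe URL and the RPC message shape below are placeholders, not the actual Network iframe API):

```javascript
// Sketch: an app embeds the Network iframe and sends it an RPC-style request
// via postMessage. All names here are illustrative, not the real API.
function createNetworkIframe(src) {
  const iframe = document.createElement('iframe');
  iframe.src = src;               // the shared Network iframe URL
  iframe.style.display = 'none';  // the node runs invisibly in the background
  document.body.appendChild(iframe);
  return iframe;
}

// Ask the Nimiq client inside the iframe for an account balance.
function requestBalance(iframe, address) {
  iframe.contentWindow.postMessage(
    { command: 'getBalance', args: [address] },
    '*', // in production, restrict this to the Network iframe's origin
  );
}
```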

To reduce the number of Nimiq nodes running at the same time in a browser, and to avoid having to re-establish consensus in each Network iframe, we would like to add a method of communication between iframe instances that enables sharing consensus between multiple users of the Network iframe.

Suggested Solution

A promising idea is to use most browsers’ built-in Broadcast Channel API:

> The Broadcast Channel API allows simple communication between browsing contexts (that is windows, tabs, frames, or iframes) with the same origin (usually pages from the same site). - MDN

Because the Network iframes are all included from the same URL origin, these iframes can communicate directly with each other via a Broadcast Channel.
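A minimal sketch of this mechanism, assuming an illustrative channel name (not necessarily what the Network iframe would use):

```javascript
// Sketch: two same-origin browsing contexts exchanging messages over a
// Broadcast Channel. Channel name and message shape are illustrative only.
const CHANNEL_NAME = 'nimiq-network';

// Helper to build uniformly shaped broadcast messages.
function makeNetworkMessage(type, payload) {
  return { type, payload, sentAt: Date.now() };
}

// The wiring below only applies in a browser context.
if (typeof window !== 'undefined') {
  // In iframe B: subscribe first, since broadcast messages are not buffered.
  const channelB = new BroadcastChannel(CHANNEL_NAME);
  channelB.onmessage = (event) => {
    console.log('received', event.data.type, 'from', event.data.payload.instance);
  };

  // In iframe A (a different browsing context on the same origin):
  const channelA = new BroadcastChannel(CHANNEL_NAME);
  channelA.postMessage(makeNetworkMessage('hello', { instance: 'A' }));
}
```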

New iframe instances would detect other, already running instances of the same iframe and would relay all requests to the iframe that already has consensus.

A deterministic communication scheme between the iframe instances on the channel has to be developed, so that there is no ambiguity about which iframe sends and which iframe receives broadcast messages.
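One way to make this deterministic is for every instance to announce a random ID on the channel and for all instances to apply the same rule to the set of known IDs, e.g. the smallest ID becomes the consensus host. A minimal sketch, with all names as placeholders rather than the actual Network iframe API:

```javascript
// Every instance applies the same rule to the same set of IDs, so all
// instances agree on the consensus host without further negotiation.
function pickConsensusHost(instanceIds) {
  return [...instanceIds].sort()[0];
}

class NetworkIframeInstance {
  constructor(id) {
    this.id = id;               // this instance's own random ID
    this.known = new Set([id]); // all IDs seen on the channel so far
  }
  // Called when another instance announces itself on the channel.
  learn(otherId) { this.known.add(otherId); }
  // True if this instance should run the actual Nimiq client.
  isConsensusHost() { return pickConsensusHost(this.known) === this.id; }
}
```

On the Broadcast Channel, each instance would announce its ID on startup, collect the announcements of the others, and re-evaluate `isConsensusHost()` whenever an instance joins or leaves, so that a new host is chosen when the current one closes.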


Network iframe source: (Only v2 of the Network iframe API needs to support this communication channel).

Broadcast Channel API docs:

For ideas for cross-window communication and message formats, refer to the Nimiq RPC library:

The RPC library can also be extended to support the use of broadcast channels. However, to avoid increasing the size of the RPC library for all other use cases, the channel communication library should probably extend the RPC classes instead.
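A hypothetical sketch of what "extending the RPC classes" could look like. `RpcClient` here is a stand-in, not the Nimiq RPC library's actual API; the point is only that broadcast support lives in a subclass rather than in the library itself:

```javascript
// Stand-in for the library's postMessage-based client (illustrative only).
class RpcClient {
  send(message) {
    // The library's default transport (window.postMessage) would go here.
    return { transport: 'postMessage', message };
  }
}

// A separate module extends the client to route over a Broadcast Channel,
// keeping the base RPC library unchanged and small.
class BroadcastRpcClient extends RpcClient {
  constructor(channelName) {
    super();
    this.channelName = channelName;
  }
  send(message) {
    // Use the Broadcast Channel when available, otherwise fall back to
    // the default transport.
    if (typeof BroadcastChannel !== 'undefined') {
      const channel = new BroadcastChannel(this.channelName);
      channel.postMessage(message);
      channel.close();
      return { transport: 'broadcast', message };
    }
    return super.send(message);
  }
}
```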

Completion Criteria

The bounty is completed when:

  • Two websites include the Network iframe, but only one Nimiq node is started and the second iframe uses the consensus of the first iframe.
  • The code is readable and organized so that it can be accepted into the Network repository.
  • If possible, the Network iframe API does not change.


1’000’000 NIM


How can we go about testing this? I’ve started my own fork of the repo at


I have added a demo page for the Network Bounty in the source network repo (linked in the OP). This demo is meant as a starting point for testing network events and a balance-check method. You are welcome to adapt the demo to test more things!

The instructions to run the demo for development are as follows:

  1. Run yarn && yarn build or npm install && npm run build in both the root and the client folder (because the demo needs a NetworkClient build as well).
  2. Edit nimiq-dist/v2/index.html and remove the hash from the included script src, so that the line looks like this: <script src="network.js"></script>
  3. Start the Rollup bundler in watch mode for automatic rebuilds while developing: yarn rollup -c -w (or npx rollup -c -w)
  4. Start your dev HTTP server in the root of the repository.
  5. Point your browser to http://localhost:<dev port>/demos/broadcast/ to run the demo
  6. Run the demo in a second tab to test shared networking

@Chugwig You will need to rebase your master branch onto the original master branch (or merge it into yours).

Happy coding!


Thanks for your help! I spent the day testing and ironing out bugs, as well as adding new features. I feel that the bounty is complete and I look forward to feedback:



I’m very sorry the review is taking longer than expected. But it’s not forgotten!
I just clicked the “demo” link and kept duplicating and closing some tabs… it seemed that connecting to the “other tab’s consensus” stopped working at some point. It got stuck at this:

The console log shows no errors; it seems like it just didn’t get update messages from the “consensus host” (what do we call all this? :smiley:)

I was wondering about the time it took to connect through the other host - and I also saw your comment in the code that the performance can be improved. Have you experimented with that? What would be realistic numbers? (3 seconds might be slower than just establishing a new pico consensus)

Thanks for your work. Let me know if I can help or you need further feedback. FYI, I also tested it here: - and I’d say it’s technically sound. :wink:

If you can find a surefire way to reproduce the issue you’re seeing I can look into it.

As for bringing down the 3 seconds, I experimented with it a little when originally working on this, but nothing too serious. All I know is that it can be brought down further; I have no idea how low it can go.

Feedback for Chugwig’s implementation and the further work on it is now being tracked in this PR: