Can "timer zero" events be injected BEFORE any already buffered messages?

Now if we added a “call timer”, which does not have “retrigger” behavior but executes the timer for every invocation, you would still have a race condition in the demo project, and you would still see ga=2 occasionally: there is no guarantee when the timer will be executed (because it runs in parallel). On a multi-core processor, it might be executed truly in parallel. On a single-core processor, it is likely to be scheduled, and possibly executed after one (or more) incoming MIDI messages have been processed. On closer inspection, I think that’s what’s happening in your demo project, too.

It’s clear that this is not a bug, but an artifact of the initial BOME design.
However, in my use cases MIDI is just not competitive for high-performance scenarios (scratching). As such, I would gladly pay for a single-threaded version that prioritizes zero-timers over processing newly queued messages.

Pedro, is this only an issue with incoming 14-bit values?

What is the behavior you want with 2 incoming CCs?

1) Every 14-bit CC value increments the counter just once?
   Only the first iteration increments the counter?
   Only the second iteration increments the counter?

2) Are there other similar situations that need handling (i.e. note-on/note-off combinations coming in quickly)?

3) Each incoming message is queued and increments the counter? (I don't think this is what you want.)

Is it important not to drop either incoming message, or do you simply want to count them?

Rather than focus on a design change, I'm trying to figure out whether there is a way to make this work for you under the current design. I'll let Florian decide if he wants to change the design, while I focus on meeting your immediate needs with what we have today.

Steve

The demo was a pure mockup to let you and Florian see the issue on your own hardware. In the real project I’m not breaking up 14-bit CCs or just counting messages.

What I’m actually doing is using dozens of timer-zeros to approximate “call subroutine” semantics. This is recursive by nature. In general it works, until messages arrive from the hardware too fast, or the CPU running that particular thread gets slowed down.

This leads to my concrete request for an experimental option: any time you are going to dequeue work, peek the queue and select the first timer-zero event in the queue. If there is no timer-zero event, dequeue the first event as normal.

Thanks Pedro!
Since what you are asking for is a design change, I will let Florian handle the request. If you are looking for a workaround, I might be able to help with concrete examples of situations you want me to look at, like the one you showed in your illustration.

I think you understand that MT Pro does not have the concept of a “function call”, so indeed function calls can only be partially emulated using timers and other techniques. For your example of incoming CCs, I can certainly provide a mechanism to catch and count only the first iteration (until the timer completes). I might even be able to count only the last iteration. But if this doesn’t help with the big picture, it may be of limited value.

One observation: you are incrementing the value within the initial translator rather than within the timer. I would recommend that, in most cases, the increment/decrement happen within the timer itself. You can also set a “busy” flag in the initial translator and have the timer clear it when done, so that the initial translator essentially ignores incoming actions while the busy flag is still set. A sketch of this pattern follows.
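
Here is a minimal sketch of that busy-flag pattern as two MT Pro translators. The counter gb and the timer name "process_it" follow the demo in this thread; the busy flag gc is just a name I picked for illustration, and the incoming trigger is only outlined, so treat it as a sketch rather than a drop-in translator:

Translator 1: incoming message starts the work
  Incoming: MIDI from "DDJ-SZ", value captured to oo
  Rules:
    // ignore this message if the previous timer run has not finished yet
    if gc==1 then exit rules, skip Outgoing Action
    gc=1
    gb=oo
  Outgoing: One-shot timer "process_it", 0 ms delay

Translator 2: the timer does the actual work and clears the flag
  Incoming: On timer "process_it"
  Rules:
    gb=gb+1
    // done: allow the next incoming message to start the timer again
    gc=0
  Outgoing: (none)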

Steve Caldwell
Bome Q and A Moderator and
Independent Bome Consultant/Specialist
bome@sniz.biz

Hi Pedro, a “call timer” would be easy to add, i.e. ensuring that every outgoing “call timer” results in exactly one incoming timer event. But when that event is executed is still up to the engine…

Your idea of prioritizing 0-timers will not work: they go through a different queue than incoming MIDI messages, and normal timers use yet another queue… and it would not fix the issue, only make it less likely to occur.

Still the same issue, but I have now made a workaround that reduces the impact.

When you see events being triggered in an undesired order, delay the rule that should execute last by adding an intermediate timer (for that rule only).

BEFORE:
-------
MIDI -> Rule_1 -> Timer_A (0ms)
Timer_A -> Rule_2
Timer_A -> Rule_3

Most of the time Rule_2 is executed before Rule_3, but not always.

AFTER:
------
MIDI -> Rule_1 -> Timer_A (0ms)
Timer_A -> Rule_2
Timer_A -> Timer_B (30ms)
Timer_B -> Rule_3

This way the situation is significantly improved.

Hmm, not sure I understand, so for a change I might reach out to you later for help if I run into this situation.

(redacted)

You are welcome. I’ve put a small example of the idea in the comment itself.

Related, about the “engine queue overflow” message:

From Florian:
This can happen during "normal" operation; it means that the project processed too many realtime events at once. The realtime queue has 400 entries. If it overflows, there is almost always a problem with the project. However, an action like AppleScript could hold up further processing. Specifying a minimal delay of 1 ms or so for such actions schedules them as non-realtime events, which go into a queue that grows as needed.
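
As an illustration (a sketch only; the Note On trigger is arbitrary, and the 1 ms value is the per-action delay Florian mentions, not taken from any specific project):

Realtime (can overflow the 400-entry queue if the action is slow):
  Incoming: Note On, any note, any velocity
  Outgoing: AppleScript action, no delay

Non-realtime (queue grows as needed):
  Incoming: Note On, any note, any velocity
  Outgoing: AppleScript action, delay 1 ms

The only difference is the 1 ms delay on the outgoing action, which moves the potentially slow action off the realtime queue.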

Looking at the sequence (slightly reduced and annotated from the OP):

1 -a- MIDI IN 0.1 [DDJ-SZ]: 91 6A 7F
2 -a- RULE 0.1:1 assignment: (gb=oo) = 106
3 -a- MIDI IN 0.2 [DDJ-SZ]: 91 6A 00
4 -b- OUT 0.1 One-shot timer "process_it": 0 ms delay
5 -a- RULE 0.2:1 assignment: (gb=oo) = 106  <<<<<<<< expected after line 7
6 -c- IN 0.4 On timer "process_it"
7 -c- RULE 0.4:1 expression: (gb=gb+1) = 107 
8 -b- OUT 0.2 One-shot timer "process_it": 0 ms delay
9 -c- IN 0.4 On timer "process_it"
10 -c- RULE 0.4:1 expression: (gb=gb+1) = 108 <<<<<<<< expected to be 107

A couple of observations:

  • I believe that you’d like lines 6 and 7 to be executed directly after line 4, correct? Lines 3 and 5 should really come after.
  • As above, I believe that your preset is behaving as designed. There is no guarantee that the first “process_it” invocation is completed before the next incoming MIDI message gets processed.
  • In particular, there are 3 things happening in parallel here; there are 3 threads:
    a) MIDI IN “DDJ-SZ”
    b) queued outgoing realtime events
    c) queued incoming events
    I’ve marked the lines above with the thread letters.

The -b- thread is a relic, and the MT Pro implementation should be updated to not use the -b- thread here (there is a system of comparing incoming vs. outgoing context to decide whether an outgoing action is executed asynchronously or synchronously). That timer can be fired perfectly fine from the -a- thread directly (synchronously). Now if MT Pro gets changed to that effect, we may still end up with a sequence that you don’t want:

1 -a- MIDI IN 0.1 [DDJ-SZ]: 91 6A 7F
2 -a- RULE 0.1:1 assignment: (gb=oo) = 106
3 -a- OUT 0.1 One-shot timer "process_it": 0 ms delay
4 -a- MIDI IN 0.2 [DDJ-SZ]: 91 6A 00
5 -c- IN 0.4 On timer "process_it"
6 -a- RULE 0.2:1 assignment: (gb=oo) = 106  <<<<<<<< expected after line 7
7 -c- RULE 0.4:1 expression: (gb=gb+1) = 107 
8 -a- OUT 0.2 One-shot timer "process_it": 0 ms delay
9 -c- IN 0.4 On timer "process_it"
10 -c- RULE 0.4:1 expression: (gb=gb+1) = 108 <<<<<<<< expected to be 107

You still have a race for line 7: because it is in thread -c-, it can come before or after line 6, causing success vs. failure for your application.

So I think the main cause for this misleading expectation is the parallelism in MT Pro. Subroutines cannot always be implemented well with 0ms timers, because there is no guarantee when the subroutine handling is completed.

We are considering a new feature for timers: parameters. In the outgoing action, you can then pass a few values along with the timer, which will be fed into the incoming timer. Now if you pass the value of gb to the timer in lines 3 and 8, and receive it accordingly in lines 5 and 9, things should be fine, correct? You may not need the gb variable at all in that case.
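
To sketch how that could look for the sequence above (purely hypothetical, since the feature does not exist yet; the parameter slot and the local variable pp receiving it are made up for illustration):

Translator A: incoming MIDI
  Incoming: MIDI from "DDJ-SZ", value captured to oo
  Outgoing: One-shot timer "process_it", 0 ms delay, passing oo as parameter 1   (hypothetical)

Translator B: the timer
  Incoming: On timer "process_it", parameter 1 received as pp                    (hypothetical)
  Rules:
    pp=pp+1
    // each invocation works on its own copy of the value,
    // so the interleaving of MIDI and timer events no longer matters

With that, the global gb would not be needed, and the race on it disappears.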


Very clear. The observed behaviour is a result of the multi-threading architecture.


Agreed, timers with parameters would indeed fix a lot of these “sub-routine” use cases.
These parameters could appear as local variables in the rules.
At minimum, three parameters would cover the most common use cases: channel / note / velocity.

Thanks again!