It is no secret that diversity is an issue in computer science. A lot has already been said and tried to improve the situation, with little success, and sadly I don’t think there is a silver bullet: this is a complicated problem.
Still, it makes sense to challenge the way we think about things: computer science tends to be dominated by a few communities which, even though they are quite international, tend to replicate their thought patterns and their preconceptions. I cringe every time someone wants to build another Silicon Valley: one is enough, we need something else.
One enduring pattern is that text is ASCII: a majority of the people working in IT come from a culture whose written language cannot be expressed properly using only the characters of modern English, yet they build systems where this or that text field cannot contain anything but English characters. A majority reproducing a pattern that does not suit them as users.
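As a minimal sketch of the pattern described above (the validators and names here are hypothetical, chosen only for illustration): an ASCII-only check quietly rejects perfectly ordinary names, while a Unicode-aware check does not.

```python
import re

# Hypothetical validators, for illustration only.
# The ASCII-only pattern replicates the restriction described above;
# the Unicode-aware one accepts letters from any script.
ascii_name = re.compile(r"^[A-Za-z ]+$")                 # English letters only
unicode_name = re.compile(r"^[^\W\d_]+( [^\W\d_]+)*$")   # any Unicode letters

def accepts(pattern: re.Pattern, name: str) -> bool:
    """Return True if the field would accept this name."""
    return pattern.fullmatch(name) is not None

# "Zoë", "Müller" and "田中" are perfectly ordinary names,
# yet the ASCII-only field rejects all three.
for name in ["Smith", "Zoë", "Müller", "田中"]:
    print(name, accepts(ascii_name, name), accepts(unicode_name, name))
```

In Python 3, `\w` (and therefore the negated class `[^\W\d_]`) is Unicode-aware by default, which is why the second pattern works without any extra configuration.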
How can you challenge the assumptions about who works in information technology if you cannot even challenge the idea of what text is? In this case it is a de facto standard that is only usable by a minority: the fraction of websites that are pure ASCII has been falling steadily, yet the number of applications and systems that can only properly process ASCII remains huge.
I’m certainly not claiming that fixing that particular technical problem would in any way improve the diversity situation, but I have the feeling that the underlying problems are similar: a system that has worked for some time, a large body of evidence showing that it is broken, and an unwillingness to change because change would challenge some core processes and assumptions…