Technological Solutions and Feasibility

Why can't social media companies just build better algorithms to detect bad content? Why can't we just give everyone a laptop and solve the problem of internet access?

These are questions you may be asking yourself as you think about solutions to this committee's topics. Let's talk about the answers, and about why these topics are not easily solved (after all, if something were easily solvable... why hasn't it been done yet?).

There are a few reasons that technological solutions can be infeasible or fail to work as intended:

Infeasibility

Sometimes, technological solutions rely on technology that simply doesn't exist. For example, it is currently infeasible to build a system that detects bad content the way a human can. Detecting malicious content is difficult enough for humans, let alone for machine learning algorithms.

The machine learning algorithms used to detect bad content are generally called "classifiers" -- which is to say that they take in a piece of content and tell you whether it is "good" or "bad." Most machine learning algorithms rely on what is called training data: existing content that a human has already classified (in this case, likely sorted into "good" and "bad") and from which the algorithm "learns" how to classify new content.
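
To make this concrete, here is a minimal sketch of a classifier in Python, assuming the scikit-learn library is installed. The tiny hand-labeled training set is hypothetical, invented purely for illustration:

    # A minimal text classifier sketch; assumes scikit-learn is installed.
    # The tiny training set below is hypothetical, for illustration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Human-labeled training data: each post is tagged "good" or "bad".
    posts = [
        "Congratulations on the new job!",
        "You are going to do great today.",
        "Nobody likes you, just quit.",
        "I hope something awful happens to you.",
    ]
    labels = ["good", "good", "bad", "bad"]

    # Turn raw text into word-frequency features, then fit a model that
    # "learns" which words tend to appear with each label.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(posts, labels)

    # Classify a new, unseen post.
    print(model.predict(["Hope your day is awful"]))  # likely ['bad']

Notice that the model never understands the posts; it only learns statistical associations between words and labels, which is exactly why context trips it up.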

However, in the case of text, this can be really tricky! Let's take the common phrase "break a leg." It is generally meant to wish someone well -- but what if someone meant it maliciously? If someone were to tweet "Go break a leg" at someone else, would you be able to tell whether they meant harm or luck? If it's hard for you, it's even harder for a computer.
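
You can see the problem directly in the features a simple model works from. The sketch below (again assuming scikit-learn, with invented example tweets) turns two messages into word counts; the friendly "break a leg" and the hostile one produce exactly the same numbers, so a word-based classifier has nothing to tell them apart:

    # Two tweets with the same words but opposite intent.
    # Example sentences are invented; assumes scikit-learn is installed.
    from sklearn.feature_extraction.text import CountVectorizer

    friendly = "go break a leg tonight"  # wishing someone luck
    hostile = "go break a leg tonight"   # meant as a threat

    vectorizer = CountVectorizer()
    counts = vectorizer.fit_transform([friendly, hostile]).toarray()

    # The two rows of word counts are identical -- the features carry
    # no trace of the writer's intent, only the words they used.
    print((counts[0] == counts[1]).all())  # True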

This is why it can be difficult to simply "create better algorithms" to detect bad content. It doesn't help that content policies are heavily context-dependent! It is also why many social media companies rely on a manual review process for much of their content.

Technology Doesn't Address a Need

Technology also has to address a real need -- and when it doesn't, it can fail.

Take One Laptop Per Child, which you can read more about in our topic synopsis. The program was originally designed to provide $100 laptops, loaded with software, to communities where access to technology was limited. However, by the time the laptops launched, they cost $200 -- more than some fully-functional laptops -- and many communities did not have the resources to properly teach children how to use them. Teachers were given little instruction, some communities lacked the electricity to charge the laptops, and for many users, learning how to use a computer was not a first priority. The laptops also ran their own operating system, making it difficult to transfer skills to mainstream computers, and they could only hold a gigabyte of data (by contrast, the smallest storage size found on computers today is generally 64 gigabytes).

While the concept behind One Laptop Per Child was appealing, and certainly attracted plenty of funding, it is from its failings that we can learn. When we talk about digital literacy, we mean literacy that translates to the real world. The technology OLPC piloted stopped being useful once cheap laptops from other brands were introduced, and many of the places where it was piloted lacked the underlying infrastructure to run it. Think of needs like a pyramid: without stable infrastructure, you cannot have computers, and without computers, you cannot have digital inclusion.

When you think about solutions for our topics, we encourage you to ask why no one else has implemented your solution yet. Has no one thought of it before, or are there other limitations that keep it from being implemented? Can you reduce those barriers, or are they outside the scope of this committee?
