This was very common at one point (between 2017 and 2020?).
A common fix for the error:
"Could not load file or assembly 'Newtonsoft.Json, Version=4.5.0.0'"
was putting the following in the web.config:
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="Newtonsoft.Json"
                        publicKeyToken="30AD4FE6B2A6AEED" culture="neutral"/>
      <bindingRedirect oldVersion="0.0.0.0-6.0.0.0" newVersion="6.0.0.0"/>
    </dependentAssembly>
  </assemblyBinding>
</runtime>
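A related sketch: instead of maintaining redirects by hand, MSBuild can generate them from the resolved assembly graph on .NET Framework 4.5.1+ projects. The two property names below are the standard MSBuild ones; whether this fits a given (especially web) project is an assumption, and web projects often still need the manual web.config redirect above.

```xml
<!-- In the .csproj: let MSBuild compute binding redirects automatically
     instead of hand-editing them in the config file. -->
<PropertyGroup>
  <AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects>
  <!-- For class libraries / test projects, also write the generated
       redirects into the output .config file: -->
  <GenerateBindingRedirectsOutputType>true</GenerateBindingRedirectsOutputType>
</PropertyGroup>
```

The NuGet Package Manager Console's `Add-BindingRedirect` command is another way to regenerate the manual redirects after a package upgrade.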
"After this upgrade, the project database can't be modified using earlier versions of Visual Studio"
FatCow deploys the following "virus" to all its new customers:
https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia-description?name=Exploit%3aHTML%2fIframeRef.R!MTB&threatid=2147781254
There are a number of reasons why you may see the "This message was blocked because its content presents a potential security issue" error in Gmail. Gmail blocks messages that may spread viruses, such as messages that include executable files or certain links.
To protect your account from potential viruses and harmful software, Gmail doesn't allow you to attach certain types of files, and it often updates the list of disallowed types to keep up with harmful software that is constantly changing.
Tip: If you try to attach a document that is too large, your message won't send. Learn more about attachments and file size limits.
File types blocked by Gmail are:
.ade, .adp, .apk, .appx, .appxbundle, .bat, .cab, .chm, .cmd, .com, .cpl, .dll, .dmg, .ex, .ex_, .exe, .hta, .ins, .isp, .iso, .jar, .js, .jse, .lib, .lnk, .mde, .msc, .msi, .msix, .msixbundle, .msp, .mst, .nsh, .pif, .ps1, .scr, .sct, .shb, .sys, .vb, .vbe, .vbs, .vxd, .wsc, .wsf, .wsh
Tip: If you're sure the file is safe, you can ask the sender to upload the file to Google Drive. Then send it as a Drive attachment.
<!--
If you are deploying to a cloud environment that has multiple web server instances,
you should change session state mode from "InProc" to "Custom". In addition,
change the connection string named "DefaultConnection" to connect to an instance
of SQL Server (including SQL Azure and SQL Compact) instead of to SQL Server Express.
-->
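The change that comment describes might look like the following sketch. The provider name and type in the commented-out "Custom" variant are hypothetical placeholders; `mode="SQLServer"` is the simpler built-in out-of-process option, and the connection string shown is only an example.

```xml
<!-- web.config sketch: move session state out of process so any
     web server instance can serve any request. -->
<system.web>
  <!-- Built-in out-of-process option: SQL Server-backed session state. -->
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=.;Integrated Security=True"
                timeout="20" />
  <!-- Or mode="Custom" with a provider; "MySessionProvider" and its
       type below are hypothetical placeholders:
  <sessionState mode="Custom" customProvider="MySessionProvider">
    <providers>
      <add name="MySessionProvider"
           type="Example.MySessionStateProvider, Example" />
    </providers>
  </sessionState>
  -->
</system.web>
```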
<?xml version="1.0" encoding="utf-8"?>
<!-- For more information on using web.config transformation visit http://go.microsoft.com/fwlink/?LinkId=125889 -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <!--
    In the example below, the "SetAttributes" transform will change the value of
    "connectionString" to use "ReleaseSQLServer" only when the "Match" locator
    finds an attribute "name" that has a value of "MyDB".

    <connectionStrings>
      <add name="MyDB"
           connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True"
           xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
    </connectionStrings>
  -->
  <system.web>
    <!--
      In the example below, the "Replace" transform will replace the entire
      <customErrors> section of your web.config file.
      Note that because there is only one customErrors section under the
      <system.web> node, there is no need to use the "xdt:Locator" attribute.

      <customErrors defaultRedirect="GenericError.htm"
                    mode="RemoteOnly" xdt:Transform="Replace">
        <error statusCode="500" redirect="InternalError.htm"/>
      </customErrors>
    -->
  </system.web>
</configuration>
When I worked on a Federal project I experienced international culture more extensively than ever before. The two cultures were Indian and Chinese. The Chinese developer I worked with was annoyed with the Indian tendency to not share knowledge liberally. At first I didn't grok what he was observing, but ever since, it has stuck in my head and I have tried to observe the differences he maintained.
What I think might be going on is that when you work with someone from your own background, your mind can pick up a hundred clues in behavior that tell you the person is worthy of your sharing knowledge (and here I am not talking about data security, which is another issue altogether, although there might be some implications involved). For instance, whether the other person is capable of understanding at the level of the technological discussion at hand. And this isn't always hierarchical, either, but can be more of a "brand" type of jargon (Microsoft vs. Linux terminology, often deliberately obfuscated for branding reasons by the former, although nowadays it might be better put as Amazon vs. Google vs. Apple). Non-programmers maybe aren't as stingy with their words, but we programmers like to economize and not talk when we don't need to.
What this leads up to is: yes, there are differences in working across cultural backgrounds.
I worry that when working remotely, the differences can be exacerbated. If you work physically proximate to someone, you can pick up on non-verbal cues to equalize the differences in language (or maybe not, as body language can also differ across cultures). You'd think Indian programmers all speak English and this would not be a barrier, but I've met many Indian programmers who seem incapable of communicating via the written word/email. I don't understand why that should differ from native English-speaking programmers, but empirically it has proven so.
=============
At what point are the very tools that are created, which DEFINE the problems at hand, the result of a cultural viewpoint? Take GitHub. Or perhaps any security protocol, that divvies out permissions in a predefined grouping? Those predefined groups (see, for instance, the default, built-in groupings for any platform).... are those representative of a culture?
The smart phone brought us the death of alt-tab (multiple windows) and eventually (I've seen the writing on the wall for a decade) of copy-paste. Both are driven by pecuniary interests (the first by the attention economy, the second by the power of intermediating whatever is chosen to be, ahem, "shared").
Also: the URL behind a link, not visible on most smart phone interfaces.
Increasingly it becomes clear that the limit of working memory has an abrupt threshold at a single day. In this sense, the movie "Memento" is a valuable tool for comprehending this phenomenon. The tactic used to surmount this limit? Documentation.
The second-order problem, then, with increasing amounts of documentation, is how to access that content efficiently. If it eventually takes as much as a day's time to find the information needed, you have reached the peak amount of complexity your system can handle. Or have you?
You could use the tactic of notating, or creating a hyperlink, to each piece of information that took the inordinate amount of time (up to a day) to locate. Now you have a shortlist of links to the information you will need to re-acquire. In practice, this feels very marginal in benefit. If it took a day to locate a single piece of information, it probably would take nearly as much time to internalize that information again to working memory.
===============
The Trust Factor: If you were the original author of the information, there is a much lower barrier to convincing oneself of the truth of re-accessed information. However, there are many times when I realized that, as elegant as my seeming understanding of a concept was, it later turned out to be not necessarily "true". Or, maybe worse, it is no LONGER "true": the underlying phenomenon has changed.
I find it dispiriting to have to let go of no-longer-true information that took a day to acquire. It feels like there is something of value there, and you don't want to discard it wholesale. This temperament probably differs greatly between people. And the people who are more casual about discarding no-longer-true information must necessarily be more "progressive".
=================
A new collision is occurring: it wasn't until interconnectivity matured (perhaps as represented by Cloud infrastructure? or just remote shared libraries?) that SECURITY became the foremost problem of IT (dating to maybe the 2010s). To that extent, DOCUMENTATION (which I believe includes the informal, such as EMAILS and REPOSITORY REVISION HISTORY) has crossed paths with security needs. Maybe legal implications too? In the 2020s we are seeing EMAIL PURGES, sometimes pretty extreme (monthly?). "Record Retention Policies". How will this impact the limits of complexity? Perhaps in practice, individuals will resort to their own informal means of self-documentation.
Here is a solid example of how age impacts technology usage.
When I first began programming, I would encounter people who resisted storing any information on a hard drive because they felt it was unreliable compared to storing on a piece of paper. Now, most of those people were not necessarily technically savvy, but some might have been, and their resistance involved things like knowing that a strong magnet could wipe out that information. It is only through accumulated empirical experience, that is, storing on media and being able to rely on it even years later, that even a technically savvy person would have achieved comfort with this idea.
Those who entered the profession with "storing information digitally on media" as their comfort baseline were able to take advantage of that comfort, empowering them to do more powerful things with data: analyzing more data, more reliably (because, empirically, electronically stored digital information trumped the mistakes made by hand record-keeping) and, especially significant in a competitive economy, much faster. Those who retained reservations about digital storage fell behind, and ultimately, were left in the dust.
Fast forward a generation. The new normal is becoming to store information "on the cloud", which means on someone's server. If it is a server you directly maintain, that is, for the most part, no different than storing on a local computer, and if you handle backup operations directly, maybe even on storage media. But soon that "on the cloud" mushroomed to include things like "OneDrive". Now, there are logical, technically savvy reasons to not feel comfortable with this, and not all of them are based on paranoia. In fact, now we are in the realm of not only trusting the laws of physics but trusting the laws of human behavior, as we have to trust other, unknown people to do reliable backups and use proper data protection protocols (although maybe even imagining these functions are still in the realm of human behavior and not automation is its own kind of non-savvy backwardness). Yet even in the previous generation there is an equivalent: trusting that the device you were relying on (which essentially was built by another unknown human, tested by another unknown human, sold and packaged by another unknown human, etc.) worked as intended.
======
It feels different to trust putting files on the company's network servers (accountability: you can ask for a server admin's head to roll?) than outside the company, on Microsoft's Cloud.
Also, file access has long had its own idiosyncrasies, which I have learned and which have not changed in a generation or more. Cloud file access seems to have UI conventions that change in an instant.
The memory of SharePoint misery impacts this. SharePoint has left a distaste in so many people, and I have judged the root cause of this to be access permissions managed by end users who are not tech-savvy. How is OneDrive ultimately different in this respect?