01 Mar 2017
Implications of open source .NET Core

Microsoft’s move to make .NET open source is an effort to encourage the sharing of bug fixes, documentation and libraries. Now, .NET can compete on a perfectly level playing field.

Microsoft has released .NET Core as open source. The decision is an effort to encourage the sharing of documentation, bug fixes and libraries. Primarily, the move is aimed at outcompeting Java: it gives Windows no special advantage and may, in fact, cut into Microsoft’s own Windows Server sales. .NET can now compete on a perfectly level playing field.

[Figure: .NET Core architecture]

IMPLICATIONS OF .NET CORE GOING OPEN SOURCE

With a growing array of consumer and enterprise needs, custom software has taken precedence over out-of-the-box solutions, which rarely address individual needs while maintaining scalability without a big price tag. New technologies promise enhanced productivity and improved competitiveness, but they present logistical challenges as well. Open sourcing .NET to take it cross-platform means shifting to a modular design that Microsoft can develop in an agile manner. That means a better .NET indeed. However, making sense of the technology means thinking about both the new technology itself and the strategy behind it.

WHY YOU NEED .NET CORE AND WHAT YOU GET FROM IT

A dozen or so years after the release of the first .NET, developers ended up with multiple, fragmented versions of the framework for various platforms. From the .NET Compact Framework to Silverlight, Windows Store and Windows Phone apps, each time Microsoft took .NET to a new platform, the supposedly ‘common’ language ended up with a different subset: a different runtime, application model and framework every time, with development happening at every layer, and APIs that trace back to a common code base but have not stayed common.

BEYOND MICROSOFT’S CONTROL

Sure, various platforms are always going to have different capabilities and features, but as .NET becomes open source and spreads beyond platforms that Microsoft controls, having a common core instead of a set of loosely coupled subsets becomes even more important. This is the basis for .NET Core and Microsoft’s open source strategy for the framework.

Microsoft has attempted to tackle the concern before, with shared projects and portable class libraries that at least allowed grouping the code for numerous .NET versions together and sharing what could be shared, and with universal applications, which also organize shared code to make adding per-platform code easier. Both are based on the concept of contracts, each of which covers one pre-defined area of the APIs and should be supported thoroughly on a platform, or not at all. Contracts were introduced, confusingly, in the Windows 8 time frame, but they are not the same as the contracts WinRT applications use to access file pickers or sharing. They are a way of abstracting APIs so they can be used as if they were the same on every platform.

SOME UPDATES TO .NET CORE

  • Language enhancements such as throw expressions and binary literals (see the sketch after this list)
  • Pattern matching and tuples, along with other language features
  • Tooling migration from the xproj/project.json system to .csproj/MSBuild
  • Support for 32- and 64-bit ARM processors on Linux and Windows
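
These language items are easiest to see in code. Below is a minimal sketch of the C# 7 constructs the list mentions (binary literals, throw expressions, tuples and pattern matching); all names are illustrative.

```csharp
using System;

class Csharp7Demo
{
    // Binary literal with digit separators.
    const int Flags = 0b0010_1010;

    // Throw expression on the right-hand side of ??.
    static string RequireName(string name) =>
        name ?? throw new ArgumentNullException(nameof(name));

    // Tuple return type: two values without a helper class.
    static (int Min, int Max) MinMax(int[] values)
    {
        int min = values[0], max = values[0];
        foreach (var v in values)
        {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        return (min, max);
    }

    // Pattern matching in a switch statement.
    static string Describe(object o)
    {
        switch (o)
        {
            case int n when n > 0: return $"positive int {n}";
            case string s:         return $"string of length {s.Length}";
            case null:             return "null";
            default:               return "something else";
        }
    }

    static void Main()
    {
        var (min, max) = MinMax(new[] { 3, 1, 4, 1, 5 });
        Console.WriteLine($"{min}..{max} flags={Flags} {Describe(RequireName("core"))}");
    }
}
```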

Probably the most interesting upgrade is the expected update to F#. Microsoft’s ‘functional first’ language is due for full .NET Core support later this year. The update will bring a better IDE experience and take advantage of tuples. F# tooling already boasts features such as syntax highlighting, autocomplete and document formatting. It is no surprise that support for the ‘fixed’ keyword and annotations is also planned.

THE OPPORTUNITY OF .NET CORE

The .NET Core technology makes sense for a cross-platform environment, which is why developers should pay attention to this Microsoft strategy. Some of the more pessimistic feedback on the .NET Core plans tends to underestimate the scope of the changes, or to assume an idealized past in which .NET was built as a modular, cross-platform stack all along. Open sourcing the framework does not simply replicate Mono or fill in Mono’s holes; those are only the initial opportunities. Mono is beginning to use the ‘reference source’ that is available for the framework, but in the long term it will take up .NET Core.

.NET Core is a componentized version of the full framework that works alongside Mono, and in time it will become the de facto cross-platform implementation of the stack. Many of the benefits of open sourcing the .NET Framework go far beyond making code available, or even getting it onto other platforms, and that is what drove Microsoft’s decision to open source it. Microsoft has shown an unprecedented level of transparency and willingness to take feedback about the framework.

Microsoft is ready to make .NET Core the future by making it open source. The way it is being built is significant for reaching a wider market.

14 Dec 2016
The when, where and why to use NoSQL

There are many instances of when, where and why one should use NoSQL. Its two major attributes, flexibility and scalability, have drawn a lot of attention and experimentation recently.

Relational databases such as SQL Server have been the go-to databases for more than twenty years. Nonetheless, the rising need to process greater varieties and volumes of data at speed has changed the nature of data storage needs for app developers. To enable this scenario, NoSQL databases, which can store unstructured and heterogeneous data at scale, have gained in popularity. NoSQL is a category of databases distinctly different from SQL databases. The term is often used to mean data management systems that are ‘Not SQL’ or, alternatively, an approach to data management that is ‘Not only SQL’.


There are many instances of when, where and why NoSQL should be used. A common use of NoSQL is when data structures are not clearly defined at the time the system is built, or when the model is centered largely on one or a few model objects and most relationships are actually child objects of those major model objects. In this scenario, there is fairly little need for actual joins. When it comes to caching, even if one wants to stick with an RDBMS as the main database, it can be useful to use a NoSQL database for caching query results or for keeping data such as counters.
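
As a concrete sketch of that caching scenario, the snippet below uses the StackExchange.Redis client to cache a query result with an expiry and to keep a counter; the connection string, key names and the SQL helper are illustrative assumptions.

```csharp
using System;
using StackExchange.Redis;

class CacheSketch
{
    static void Main()
    {
        // Assumes a Redis instance on localhost; adjust as needed.
        var mux = ConnectionMultiplexer.Connect("localhost");
        IDatabase db = mux.GetDatabase();

        // Cache an expensive query result for five minutes.
        string key = "user:42:profile";
        string cached = db.StringGet(key);
        if (cached == null)
        {
            cached = LoadProfileFromSql(42); // hit the RDBMS only on a miss
            db.StringSet(key, cached, TimeSpan.FromMinutes(5));
        }

        // Counters: one atomic increment, no read-modify-write round trip.
        long views = db.StringIncrement("page:home:views");
        Console.WriteLine($"{cached} / {views} views");
    }

    // Placeholder for the real relational query.
    static string LoadProfileFromSql(int id) => $"{{\"id\":{id},\"name\":\"Ann\"}}";
}
```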

The two key attributes of NoSQL databases are scalability and flexibility. Although NoSQL databases have not quite reached the hype of the Hadoop data management framework, they are drawing a lot of attention and experimentation. It is important to choose wisely among the various NoSQL options and the trade-offs required to gain that flexibility and scalability. The databases are simpler and more affordable than their relational counterparts, and that simplicity contributes to rapid development and performance at scale. Most, although not all, NoSQL databases are open source, so one can begin with community software and add commercial support, and helpful commercial add-on modules, as the deployment progresses. Since the biggest dissatisfaction with existing databases arises from licensing terms and costs, free and open looks appealing to most IT teams, particularly those bootstrapping a pilot project.

Typically, NoSQL is good for unstructured or schema-less data. NoSQL generally favors a denormalized schema because there is no support for JOINs as in the RDBMS environment, so one would normally keep a denormalized, flattened data representation. Using NoSQL does not have to mean losing data: different databases have different strategies, and one can choose at what level to trade performance against the possibility of data loss. It is often very easy to scale out NoSQL solutions, and they are seen as a key part of the new data stack. When something is so massive that it must be massively distributed, NoSQL is the answer, although not all NoSQL systems target ‘big’: bigness can run across many dimensions, not just using plenty of disk space.
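
To make the denormalized, flattened representation concrete, here is a hypothetical order stored as a single document, with its child line items embedded so that one key lookup replaces the JOIN an RDBMS would need.

```csharp
class DenormalizedOrderExample
{
    // One self-contained document per order: the child objects travel with
    // their parent, so reading the order requires no JOIN.
    public const string OrderDocument = @"{
        ""_id"":      ""order:1001"",
        ""customer"": { ""id"": 42, ""name"": ""Ann"" },
        ""items"": [
            { ""sku"": ""A-7"", ""qty"": 2, ""price"": 9.50 },
            { ""sku"": ""B-3"", ""qty"": 1, ""price"": 4.25 }
        ],
        ""total"": 23.25
    }";
}
```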

NoSQL provides fast key-value access. When latency is paramount, it is difficult to beat hashing on a key and reading the value directly from memory, or in as little as a single disk seek. Not every NoSQL product is about rapid access; some are more about reliability, for instance. Nonetheless, what people have wanted for a long time is a better cache, and many NoSQL systems provide one. NoSQL products also support an array of new data types, a major area of innovation. Complicated objects can be stored easily without a lot of mapping, and developers love avoiding complex schemas as well as ORM frameworks. The lack of structure allows much more flexibility.
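
As an illustration of storing complex objects without mapping, the sketch below serializes an object graph straight to JSON (using the Json.NET library) so it can be written to a document store as a single value; the types are invented for the example.

```csharp
using System.Collections.Generic;
using Newtonsoft.Json;

class Review { public string Author; public int Stars; }

class Hotel
{
    public string Name;
    public List<Review> Reviews = new List<Review>();
}

class NoOrmSketch
{
    static void Main()
    {
        var hotel = new Hotel { Name = "Seaside" };
        hotel.Reviews.Add(new Review { Author = "Ann", Stars = 5 });

        // The whole object graph becomes one document: no table mapping,
        // no ORM configuration, no JOINs to reassemble it later.
        string doc = JsonConvert.SerializeObject(hotel);
        System.Console.WriteLine(doc); // {"Name":"Seaside","Reviews":[{"Author":"Ann","Stars":5}]}
    }
}
```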

Schema-less-ness also makes it easier to deal with schema migrations without much worry. Schemas are in a sense dynamic, because they are imposed by the application at run time, so different parts of an application can have different views of the schema. Ease of administration, maintainability and operations are very product specific, but many NoSQL vendors try to win adoption by making it easy for developers to adopt them. They spend a lot of effort on ease of use, minimal administration and automated operations. This can lead to lower operating costs, since special code does not have to be written to scale a system that was never meant to be used that way.
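
One common way to exploit those run-time schemas is to version documents and upgrade them lazily as they are read; the following is a minimal, hypothetical sketch of that pattern.

```csharp
using System.Collections.Generic;

class LazyMigrationSketch
{
    // Upgrade a document to the current schema as it is read; documents
    // written by older code simply get defaults filled in on first touch.
    static Dictionary<string, object> UpgradeOnRead(Dictionary<string, object> doc)
    {
        var version = doc.ContainsKey("schemaVersion") ? (int)doc["schemaVersion"] : 1;
        if (version < 2)
        {
            doc["loyaltyPoints"] = 0;   // field added in schema v2
            doc["schemaVersion"] = 2;
        }
        return doc;   // write-back could happen here, or on the next save
    }

    static void Main()
    {
        var oldDoc = new Dictionary<string, object> { { "name", "Ann" } };
        var fresh = UpgradeOnRead(oldDoc);
        System.Console.WriteLine(fresh["loyaltyPoints"]); // 0
    }
}
```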

Because they have focused on scale, NoSQL systems tend to exploit partitions and avoid heavyweight strict-consistency protocols, so they are well positioned to operate as distributed instances. Generally, NoSQL systems are the only products with a ‘slider’ for selecting where to land on the CAP spectrum. Relational databases choose strong consistency, which means they cannot tolerate a partition failure. In the end this is a business decision, and it must be decided on a case-by-case basis.
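
That ‘slider’ usually boils down to Dynamo-style quorum settings, as in Cassandra or Riak: with N replicas, a read quorum R and a write quorum W are guaranteed to overlap, and therefore to see the latest write, whenever R + W > N. A small sketch of the arithmetic, with illustrative numbers:

```csharp
using System;

class QuorumSketch
{
    // Dynamo-style rule of thumb: reads see the latest write when the
    // read and write quorums must overlap, i.e. R + W > N.
    static bool IsStronglyConsistent(int n, int r, int w) => r + w > n;

    static void Main()
    {
        // N = 3 replicas: W=2, R=2 overlaps (consistent, slower);
        // W=1, R=1 does not (fast, eventually consistent).
        Console.WriteLine(IsStronglyConsistent(3, 2, 2)); // True
        Console.WriteLine(IsStronglyConsistent(3, 1, 1)); // False
    }
}
```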

14 Nov 2016
Bad coding practices that could destroy a software development project

In any coding task, best coding practices are a set of informal rules that the software development community has learned over time which can help boost the quality of a program. A lot of computer programs remain useful far longer than their original authors imagined, some for forty years or more. That is why any such rules must facilitate both initial development and subsequent enhancement and maintenance by people other than the original authors.


Most of the time, developers do the appropriate thing. On the rare occasions that they don’t, bad things can happen. Avoiding these bad practices makes the work easier and the software more scalable and secure to boot. In software programming and development, the principle can be summarized by saying that most issues are caused by a few bad coding practices; eliminating them makes the task more productive and much easier.

The following are ten bad coding practices that can destroy any development project and must be avoided in order to create effective solutions.

1. TYPOS IN THE CODE

Surprisingly, these are common, and quite maddening, since they have nothing to do with the developer’s programming skill. Even so, a misspelled variable or function name can wreak havoc on code, and typos are not always easy to see. The solution is to work in a good integrated development environment (IDE), or even a programmer-centric text editor, which can significantly reduce such errors. Another thing that can be done is to intentionally choose function and variable names that are easy to spell, and thus easy to spot when misspelled. Refrain from using words like ‘receive’, which can be misspelled ‘recieve’ without being obvious.

2. FAILURE TO MODULARIZE CODE

It is good practice to write functions that perform one thing and one thing only. This keeps them short and therefore easy to understand and maintain. Long functions have many possible paths through them, which makes them harder to test. A good rule of thumb is that one function should occupy no more than one screen. Another is that if it contains ten or more ‘if’ statements or loops, it is too complex and should be rewritten.
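
As a small illustration of the one-function-one-job rule, the hypothetical report below delegates each step to a short, separately testable helper instead of doing everything inline.

```csharp
using System;
using System.Linq;

class ModularSketch
{
    // Each helper does one thing and fits on a fraction of a screen.
    static int[]  LoadSales()           => new[] { 120, 45, 300 };
    static int    TotalOf(int[] sales)  => sales.Sum();
    static string FormatReport(int sum) => $"Total sales: {sum}";

    // The top-level function reads like a summary of the steps.
    static void Main() =>
        Console.WriteLine(FormatReport(TotalOf(LoadSales())));
}
```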

3. HARD-CODING PASSWORDS

It is tempting to hardcode a secret account and password so one can get into the system later, but this should not be done. Although it is extremely convenient, it is also highly convenient for anyone with access to the source code. The real issue is that a hardcoded password eventually becomes more widely known than intended, which makes it a big security risk, not to mention a very inconvenient fix.
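
A minimal alternative to hardcoding is to read the secret from the environment (or a vault) at run time and fail fast when it is missing; the variable name here is an illustrative assumption.

```csharp
using System;

class NoHardcodedSecrets
{
    static void Main()
    {
        // The password lives outside the source tree and the binary;
        // deployment supplies it per environment.
        string dbPassword = Environment.GetEnvironmentVariable("APP_DB_PASSWORD")
            ?? throw new InvalidOperationException("APP_DB_PASSWORD is not set");

        Console.WriteLine($"Connecting with a {dbPassword.Length}-character secret...");
    }
}
```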

4. FAILURE TO USE GOOD ENCRYPTION FOR DATA PROTECTION

Sensitive data must be encrypted as it travels over the network, because that is where it is vulnerable to interception. This is not only a good idea but a regulatory requirement, if not the law, which means sending data in the clear is a no-no. Writing one’s own encryption system is difficult, so use a proven, industry-standard encryption library and use it correctly.
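
For data in transit, the proven choice is usually just TLS (HTTPS or SslStream). The sketch below shows the other half of the advice, leaning on the framework’s vetted AES implementation instead of a home-grown cipher; bare CBC without authentication is used only to keep the sketch short.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class EncryptSketch
{
    static void Main()
    {
        byte[] plaintext = Encoding.UTF8.GetBytes("card=4111...");

        using (var aes = Aes.Create())      // framework-provided, vetted AES
        {
            aes.GenerateKey();              // in practice: a managed key store
            aes.GenerateIV();

            using (var enc = aes.CreateEncryptor())
            {
                byte[] ciphertext = enc.TransformFinalBlock(plaintext, 0, plaintext.Length);
                Console.WriteLine(Convert.ToBase64String(ciphertext));
            }
        }
        // Real deployments should prefer authenticated modes (e.g. AES-GCM)
        // and TLS for anything that crosses the network.
    }
}
```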

5. FAILURE TO FORMAT OR INDENT

Indenting and otherwise formatting code makes it easier to comprehend at a glance and thus to spot errors. It also makes the code much easier for others to maintain, since it is presented consistently. If your IDE does not format code automatically, consider running it through a code beautifier like Uncrustify, which formats it consistently based on configured rules.

6. FAILURE TO THINK AHEAD

A programmer should think ahead: what is the project for, how much is it expected to scale, how many users will it have, and how fast must it run? Precise answers may not be available, but if one fails to make estimates, it is impossible to select a suitable framework for developing an app that can meet the requirements.

7. LETTING THE IDE LURE ONE INTO A FALSE SENSE OF SECURITY

IDEs and other tools that provide code completion are great for productivity. They suggest variables and other names based on what is in scope, given what has already been typed. However, there is a danger with this kind of tool: a developer may choose something because it looks like what is expected, without taking the effort to make sure it is exactly what is needed. In essence, the tool does the thinking, when in fact the programmer must make certain that the thinking is correct. Nevertheless, there is a fine line to be drawn, since completion tools do help eliminate errors like typos and boost productivity.

8. ADDING PEOPLE TO MAKE UP FOR LOST TIME

Almost every software project falls behind schedule. Adding people to the task to get it back on track sounds like a good idea in theory; nevertheless, it is a common mistake. Adding new people to a late task almost always causes a plunge in overall productivity.

9. PREMATURELY OPTIMIZING A CODE

Donald Knuth, the legendary programmer, said that programmers waste an enormous amount of time thinking or worrying about the speed of non-critical parts of their programs, and that these attempts at efficiency actually have a strong negative effect when maintenance and debugging are taken into consideration. Being clever with the code may make it run infinitesimally faster, but it makes it much harder to maintain and debug. A better strategy is to write it clearly first, and then optimize only the parts that truly need it in order to improve performance.
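
A tiny before-and-after in the spirit of Knuth’s advice: the ‘clever’ version saves nothing measurable but costs readability.

```csharp
class OptimizationSketch
{
    // "Clever": a bit trick that saves nothing measurable here but puzzles
    // the next maintainer.
    static bool IsEvenClever(int n) => (n & 1) == 0;

    // Clear: reads as intended; optimize only if a profiler says this
    // line actually matters.
    static bool IsEven(int n) => n % 2 == 0;

    static void Main() =>
        System.Console.WriteLine(IsEven(42) && IsEvenClever(42)); // True
}
```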

10. UTILIZING KNOWN BAD TIME ESTIMATES

It is also important to avoid the temptation to imagine that one can catch up with the schedule later without adding people to the project. If one falls behind schedule, it is because the estimates were wrong. That means making a new estimate of the project’s length, not blindly sticking to an estimate that has already been proven wrong.

SUMMARY

A professional software development company holding certifications like CMMI Level 3 and Microsoft Gold Competency follows standard processes for coding, development and quality assurance, which ensures that bad practices are avoided and the best results are achieved.

20 Oct 2016
How Virtual Reality transforms the Hotel industry

Virtual reality is one of the hottest buzzwords in the hotel industry. It presents a new platform for engaging current and potential guests and marketing the best that a hotel has to offer in a fresh way.

One of the hottest buzzwords in the hospitality industry and in technology today is Virtual Reality (VR). It is a natural progression of the web’s love of video and its hope of bringing realistic, interactive video to life. The technology was first conceived by science fiction writers seven decades ago, but the term as we know it today was popularized during the 80s by VPL Research, which developed and sold the first VR technology. However, there was more commercial appeal in developing the internet, and the fledgling industry turned its focus toward equipment for flight simulators, medical and military training, and auto design. Virtual Reality came back into the spotlight quickly as companies such as Samsung released new technology and Facebook announced a social virtual reality team. It comes as no surprise that everyone wants to talk about their plans for the new technology.


These days, brands from Hilton to Best Western to Marriott are focusing on custom software development for hotels and building new experiences for guests. The virtual and augmented reality trend in the industry is in its infancy, and with the present state of the technology, only big brands such as Hilton and Marriott can capitalize on the experience. Nonetheless, adoption is expected to be quick, and more widespread solutions will soon be available. It is fun, and at the same time interesting, to see all the new technology entering the hotel and hospitality environment. Virtual Reality isn’t new; its roots go back over three decades. Early efforts were severely hampered by poor graphics, clunky headgear and very noticeable lag times. The effect was often unpleasant, and even recently it has been a challenge when the hardware and software could not keep up. Fortunately, the hotel industry learned from those experiences and the technology has advanced dramatically.

For the tourism and hotel industries, VR offers an attractive way of putting prospective guests inside their walls and beyond. One good example is Marriott International, which took advantage of the technology with its ‘Travel Brilliantly’ campaign.

As part of the campaign, the hotel unveiled big booths at its New York City branch where guests could virtually walk the beaches of Hawaii or climb to the top of London’s Tower 42. Along with sight and sound, users were immersed with heat, scents and mist on their faces. The company hopes the experience will inspire people to book a trip. Another aim of the campaign was to help the group build credibility with younger, usually more tech-savvy travellers.

Although the full extent of VR’s utility has yet to be seen, VR is only going to become more pervasive in the years to come. A sizeable share of potential customers will be walking around with their own VR systems. In the same way that mobile devices have become a given, Virtual Reality messaging can potentially become a primary means of reaching the audience. And just as with mobile technology, hoteliers who want to reach prospective guests will have to find a way to make their message heard over the roar of the crowd. Concepts that are enticing one day become commonplace the next, and those who can craft eye-catching, original content will be the voices that get heard.

Consumers need plenty of information for their travel purchases, and Virtual Reality will have a tremendous impact on the sector. A traveller can put himself right on the spot, or in the activity, to see whether it meets his needs and wants. In this business it is very important to set the right expectations, and VR and augmented reality can paint a clear picture. Simply put, VR leaves little to the imagination, but in the best possible way. At its core the technology is entertaining, and the possibilities appear endless. Hotels and other hospitality companies can use VR to show people what local attractions and activities look and feel like. It offers an unequalled preview of what a guest can expect and takes away the uncertainty travelers may have. When people travel they are out of their element, and VR can help make them feel more comfortable.

27 Sep 2016
Innovation and development in the Oil and Gas field

Innovation and development in the oil and gas field is vital nowadays. For the industry to continue to serve its purpose and remain profitable, various technologies are needed to bring about some very big changes that benefit both the industry and people around the world.

16 Feb 2011
Why prefer a CMMI certified software development company?

Software development companies based in the UK are emerging thanks to the growing demand for software development services. Many companies, small to large, claim to have the skilled resources to serve your software development requirements; but why are skilled resources alone not enough when you are searching for a software development company? You browse several companies that can meet your business needs and assess them by taking quotations and reviewing their previous work. But how does a company guarantee you an unmatched, good-quality solution delivered on time? This is the point at which you become interested in the processes a software development company follows for developing and deploying a project.