“Small Chip – Big Future SSP The Integrated Secure Element”
This video is “short introduction text by you as you know better how to explain the rationale to them”.
Watch it now (click link below) and do not hesitate to share it widely!
Available digital services are growing quickly, but they are often held back by the lack of a trusted digital identity that is available to everyone, everywhere. In daily activities, people cannot enjoy a seamless service experience because a suitable digital identity infrastructure is not in place. Identity is broken on the Internet.
The problem of identity on the Internet can be attributed to several factors, including the following:
In the absence of a standard ubiquitous digital identity framework, the identification processes used for different services on the Internet have evolved independently, leading to fragmented and inconsistent solutions with variable levels of security, assurance and usability.
Corporate Single Sign-On products are proliferating, and most solution providers assume that the same solutions fit the Internet. As a result, we now have many incompatible, competing products being marketed as suitable identity solutions for the Internet. These identity federation / SSO products are sold as authentication technologies for the Internet, but the level of assurance, identity, registration and authentication are bound together so tightly that they work only within service provider silos.
Banks, for example, still want to bind users to a specific service technology within their silos, and no cooperation with other lines of business is possible. This was the business model of the last century, and it is a poor fit for the Internet age.
A uniform and simple user experience helps services advise their users when assistance is needed. Currently we are far from this: even the simplest authentication flow can be made far too complicated.
Various authentication methods provide only web session authentication. This means that Fintech transaction approval and legally binding signatures are still hard to implement.
Regulators, at least, want to fix the problems of identity from a consumer's security and privacy perspective. Regulations such as the EU's GDPR mean that the consequences of poor security are now high. Similarly, PSD2 requires payment transactions to use strong authentication.
However, solution providers still pick and choose: they can define their solution scope as they wish, with no obligation to support the GDPR or PSD2 requirements.
From our point of view, the key problem is in the identity schemes used on the Internet. The following figure depicts typical identity schemes (i.e. how identity and services are related). There are three schemes:
Given what we know, the question is how do we address the problem of broken identity on the Internet?
We think that siloed and centralized identity schemes are unnatural on the Internet. The only scheme that fits the Internet is the mesh scheme, which allows free-form relations between parties without restricting usage of the identity.
…to be continued.
In cryptography we have things that are proven insecure, things that are proven secure, and a large collection of things that are neither. PKCS#1 v1.5 signature security happens to sit in this middle ground, and has been there since 1998. Nobody has broken it, but nobody has proven it secure in the forms in which it is used, either.
The 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS ’18) had a paper exploring PKCS#1 v1.5 signature security. A preprint version is available at https://eprint.iacr.org/2018/855.
Bellare and Rogaway showed already in 1996 that when the hash output size equals the RSA key size, the signature scheme is as secure as the hash itself. They then defined RSA-PSS.
Until now there has not been any comparable proof of the security of signatures with PKCS#1 v1.5 padding.
“But we already have RSA-PSS, and PKCS#1 v2.1 recommends use of that.”
In theory RSA-PSS is a nice solution, but its support in SIM card environments is poorly specified, and thus implementations vary in detail. Nor is it widely accepted in signature responses. Furthermore, it really requires user certificates indicating that signatures are to be made in RSA-PSS format.
As the paper notes, RSA-PSS is a very complicated construction, while PKCS#1 v1.5 signatures have not been shown to be broken. Therefore everybody still uses basic PKCS#1 v1.5 signatures, and therefore the authors took another attempt at proving the properties of PKCS#1 v1.5 signatures.
The main conclusion of our work is that from a provable security perspective RSA PKCS#1 v1.5 can be safely used, if the output length of the hash function is chosen appropriately.
…
Thus, even though our proofs do not immediately apply to PKCS#1 v1.5 when instantiated with standard hash functions, such as SHA-512, we show that it is still possible to instantiate PKCS#1 v1.5 signatures in a meaningful way, and based on standardized constructions, such as MGF1 from RFC 8017 or the XOFs SHAKE128 and SHAKE256 standardized by NIST in FIPS 202.
This new proof holds for a case where the hash size is at least half of the RSA modulus size (2048 bit modulus : 1024 bit hash).
External hashing with e.g. SHAKE256, producing a 1024-bit hash output, and then signing that output is provably as secure as the hash itself.
Proving that PKCS#1 v1.5 is safe with SHA-256 and a 2048-bit RSA key size is still to be achieved, but it should be possible.
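As a rough illustration of what such an instantiation looks like, here is a sketch of the EMSA-PKCS1-v1_5 padding with a 1024-bit SHAKE256 digest sized for a 2048-bit modulus. This is a simplified picture only: the ASN.1 DigestInfo wrapper that RFC 8017 mandates around the digest is omitted.

```python
import hashlib

def emsa_pkcs1_v15_shake256(message: bytes, em_len: int = 256) -> bytes:
    """Sketch of EMSA-PKCS1-v1_5 padding with a 1024-bit SHAKE256 digest.

    em_len = 256 bytes matches a 2048-bit RSA modulus, so the digest
    covers half the modulus, as the new proof requires. The ASN.1
    DigestInfo wrapper mandated by RFC 8017 is omitted for brevity.
    """
    digest = hashlib.shake_256(message).digest(128)   # 1024-bit output
    padding = b"\xff" * (em_len - len(digest) - 3)    # PS field
    return b"\x00\x01" + padding + b"\x00" + digest   # EM = 00 01 PS 00 T
```

The resulting encoded message would then be signed with the raw RSA private-key operation, exactly as with any PKCS#1 v1.5 signature.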
While practical experience of “it has not been broken so far” with something as popular as the PKCS#1 v1.5 signature is not a proof of anything, it does considerably lessen the urgency of adopting RSA-PSS as a replacement.
It should be noted that PKCS#1 v1.5 encryption is breakable with a chosen-ciphertext attack, and RSA-OAEP should be used instead.
On the other hand, if one has a single-use (“ephemeral”) 1024-bit RSA key, the chosen-ciphertext attack does not really matter, and mass factoring of single-use 1024-bit RSA public keys is still beyond anybody's capabilities.
Have we really lost the SIM card security? Of course not.
Lately the news has reported multiple SIM card attacks. In both of these cases the SIM card lets third parties do undesirable things because SIM card security is (at least partially) disabled.
A SIM card can host multiple applets; it usually has at least applets for card management (RFM, RAM), but often many more. For these specific attacks to work, the SIM browser applet must have been configured with a Minimum Security Level (MSL) setting of 00 (disabled). Anybody producing production cards should use a minimum MSL of 06 or 16 for all applets.
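The MSL values 06 and 16 are simply sums of the SPI first-byte flags from ETSI TS 102 225 (listed in the references below). A small illustrative check — the helper function is invented here, not part of any card tooling:

```python
# SPI first-byte flags from ETSI TS 102 225; the MSL adds them together.
CRYPTOGRAPHIC_CHECKSUM = 0x02
CIPHERING              = 0x04
COUNTER_INCREMENTING   = 0x10

def msl_acceptable(msl: int) -> bool:
    """Hypothetical check: an applet's MSL should require at least a
    cryptographic checksum and ciphering (i.e. include 0x06)."""
    required = CRYPTOGRAPHIC_CHECKSUM | CIPHERING  # 0x06
    return msl & required == required

# MSL 06 = checksum + ciphering
# MSL 16 = checksum + ciphering + counter incrementing
```

With this reading, MSL 00 requires nothing at all, which is exactly why field-deployed cards with MSL 00 are open to abuse.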
Having any applet with MSL 00 is acceptable for test SIM cards, but on field-deployed cards it is an open invitation for anybody to access that applet and, if that applet happens to be card management (RFM, RAM) or a browser (WIB, S@T), to do unpredictable things. At minimum that is against the operator's business interests.
A long-established mobile operator typically has long-established relationships with SIM card suppliers, and there is no need to modify or even review old, established card definitions, only to add new ones for e.g. UMTS, LTE and 5G. The S@T browser is one example of those old things. Therefore an old mistake is very likely to persist in the latest card definitions.
This is similar to SIM cards using OTA keys of the 56-bit DES type, which took a long while to phase out, even after an attack against single DES on SIM cards was demonstrated.
An overall card definition review is a good thing to do every now and then.
SIMalliance recommends implementing filtering at the network level to intercept and block illegitimate binary SMS messages. However, many MNOs seem to find the SIMalliance advice to “filter illegitimate messages” unclear and have gone on to filter all binary SMS messages (i.e. those with PID=7F, DCS=F6) destined for all users. Unconstrained filtering such as this will break some MNO services, which must then be fixed afterwards.
The SIMalliance recommendation thus needs one major clarification: all of the MNO's OTA servers (such as file management OTA, MSSP Service OTA, etc.) must be excluded from the filtering. Hence, below are some SMSC-specific supplements and clarifications to the general SIMalliance recommendation.
Note: Some Mobile ID systems (G&D WIB) send command packets in both the Mobile Terminated and Mobile Originated directions. Do not block this traffic.
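The clarified rule can be sketched as follows. This is a hypothetical illustration of the SMSC-side logic, not production code, and the whitelisted originator address is invented:

```python
# Hypothetical whitelist of the MNO's own OTA server addresses
# (file management OTA, MSSP Service OTA, etc.) -- example value only.
OTA_SERVERS = {"358401234567"}

def allow_binary_sms(pid: int, dcs: int, originator: str) -> bool:
    """Pass ordinary SMS unchanged; pass binary SMS (PID 7F, DCS F6)
    only when it originates from a known OTA server."""
    if pid == 0x7F and dcs == 0xF6:
        return originator in OTA_SERVERS
    return True
```

Filtering this way blocks Simjacker-style traffic from arbitrary senders without breaking the MNO's own OTA services.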
AdaptiveMobile Security, 2019. Simjacker.
Dan Goodin in Ars Technica, September 2019. SIMJACKER — Hackers are exploiting a platform-agnostic flaw to track mobile phone locations.
Catalin Cimpanu in ZDNet, September 2019. New SIM card attack disclosed, similar to Simjacker.
ETSI, July 2018. ETSI TS 102 225: Smart Cards; Secured packet structure for UICC based applications. Coding of the first SPI byte, and the MSL setting. Add together:
02 = cryptographic checksum
04 = ciphering
10 = require counter incrementing
SIMalliance, August 2019. Security guidelines for S@T Push.
Security Research Labs, September 2019. New SIM attacks de-mystified.
A public key system has two associated pieces of data: the public key and the private key. The private key is used to securely create an electronic signature (or to decrypt a message). This key is held strictly within the secure containment of the Signature Creation Device (SCD), where the signature is created. This is referred to as public key cryptography and public key signatures.
The public key is used to verify signatures (or to encrypt messages), and it can be published to everyone (as the name “public key” implies). In this way, anyone can verify a public key signature.
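The division of roles can be illustrated with textbook RSA on toy numbers. This is not real cryptography — real systems use 2048-bit or larger keys with padded hashes, never raw numbers — but it shows which key does what:

```python
# Toy textbook RSA: private key signs, public key verifies.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (kept in the SCD)

message = 42
signature = pow(message, d, n)      # created with the private key (d, n)
assert pow(signature, e, n) == message  # anyone can verify with (n, e)
```

Only the holder of d can produce a signature that verifies under (n, e), which is why the private key containment question below matters so much.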
Both an applet on a SIM (or eSIM) card and a smartphone application (App) can be used as platforms for creating public key signatures. The key question today is: “Can we trust the private key containment?”
Both platforms have had their share of annoying implementation mistakes. For example, there have been incidents where an external party has been able to easily determine or calculate the private key. There have also been incidents where an outsider has been able to execute unauthorized commands on the platform.
These incidents are bugs, visible only once they are found and reported. Is there such a thing as bug-free software, guaranteed to have no bugs in the future? Alternatively, we can ask how to fix, or limit, the possible harm a bug can cause.
Let’s try to compare how security bugs are dealt with on these three platforms.
The App and/or smartphone vendor may publish software bug fixes.
Embedded SIM (eSIM)
The eSIM is an integral component of the phone. The eSIM has good security controls which can be used to limit bug effects.
The SIM card is proven, and has good security controls which can be used to limit bug effects.
A high-end smartphone from a known vendor may be fine while the phone model is less than three years old. However, the trend nowadays is that people keep their phones for longer, with fewer people interested in, or able to afford, spending 800 euros on a phone every other year.
Similarly, budget smartphone manufacturers tend to save on everything possible. Low-cost phones require low-cost components, manufacturing and logistics. For example, the security firmware required for secure containment of the private key, such as the TEE, is likely to be the first expensive part to be dropped. Also, these phones or their firmware are often made in countries that encourage installing backdoors which call home for unspecified reasons.
So what are the most important factors that need to be considered when choosing a secure public key solution for citizens? Based on our experience of the smartphone market, we can say that even though smartphone technology has developed a lot during the last ten years, its security has only been developed over the past few years.
All this means that there will be a significant amount of low cost phones. Security is as good as the weakest link. Therefore, we claim that the most important factors are:
This highlights that, in the marketplace, security comes a distant second after price.
The cheapest devices for private key containment are discrete SIM cards. If SIM security has an implementation bug, like ROCA in 2017, containing the incident by replacing the cards is an option that does not require replacing all of the users' phones, contacting multiple vendors, or restricting the service.
What about TEE on smart phones? Could it solve the cost, security and maintainability challenge?
Today, many TEE implementations have not seen much use. They may still have poorly implemented cipher primitives in them. ARM TEE implementations have been seen using vulnerable security primitives that allow exfiltration of the secure containment's contents (the most recent being CVE-2018-11976 in a Qualcomm TEE implementation).
Such flaws require vendor-supplied patches, but on Android these stop being distributed after a few years, nearly universally and independent of the vendor. Even Apple has made this type of mistake in iPhones in the past, but even old iPhone models still get the latest iOS releases with all patches.
Patch availability is subject to many things, mostly “that is such an old product, we no longer deliver updates for it”, but also trade politics (see the Huawei case).
Similarly, rooting the phone overrides the inherent security of the TEE. Phone rooting (installing firmware that lets one bypass security mechanisms) can easily replace the TEE implementation with a new one that does not report the phone as rooted, and will happily serve out internal key material.
What if we could implement all this security within the App itself? Today most identity Apps rely on Android/iOS key storage and security libraries. Only a few Apps implement their own algorithms, fingerprint readers or face-recognition mechanisms, and when they do, they require high-end smartphone platforms. Therefore, Apps themselves cannot solve the cost, security and maintainability challenge.
The eSIM is a promising secure technology that does not suffer from the vulnerabilities of smartphone App implementations, though it is currently only available on expensive high-end smartphones. With reduced chip costs, eSIMs, along with SIMs, promise the highest PKI security today and in the near future.
Methics moved offices on the 2nd of May 2019.
The address for the new office is:
Lars Sonckin Kaari 14
Methics Oy has received the Strongest in Finland Platinum certificate from Finnish credit rating company Suomen Asiakastieto Oy.
IANA has today registered the provisional URI scheme “mss”, published by Methics. The “mss” scheme defines the URI for Signature Profiles used when requesting mobile signature services from the MSSP Platform. You can find the scheme at: https://www.iana.org/assignments/uri-schemes/prov/mss
The success of the Mobile Signature Service (MSS) relies on a feature whereby an Application Provider (AP) does not need to know in which mobile operator's network a user can be found. This feature is called “signature roaming”.
If the reachable mobile users in the various mobile networks do not all have the same signature capabilities, it is hard for an AP to determine what kind of mobile signature to request from different mobile users, especially when several different mobile signature services are available. The MSSP can define different mobile signature capabilities for mobile users, but the AP should be able to indicate some kind of signature type or quality to the MSSP. This capability is called the Signature Profile.
Using the ProfileQuery mechanism, the AP can request any mobile user's Mobile Signature Profiles from an MSSP. The AP can then use any specific Signature Profile assigned to the user in the Mobile Signature Request.
Signature Profile is used in the following mobile signature service operations:
| Operation | Signature Profile usage |
| --- | --- |
| Signature Request | The relying party (or AP) requests this Signature Profile to be used in the processing of the transaction. |
| Signature Request: Signature Profile Comparison | If set to “exact”, the MSSP is asked to match at least one of the specified <SignatureProfile> elements of the MSS_SignatureReq message exactly. If set to “minimum”, the MSSP is asked to use a profile that is at least as good as any specified in the <SignatureProfile> element. If set to “better”, the MSSP is asked to use a profile better than any that were supplied. |
| Signature Response | The MSSP should state which Signature Profile was used (optional). |
| ProfileQuery | One or several Mobile Signature Profiles supported by this MSSP for the mobile user. |
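The three comparison modes can be sketched in code. The numeric ranking of profiles is an assumption of this illustration (the standard does not define a total ordering), and the profile URIs used in it are invented:

```python
def select_profile(supported, requested, comparison, rank):
    """Sketch of Signature Profile Comparison handling.

    rank maps a profile URI to a quality score (higher is better);
    this ordering is an illustrative assumption, not standardized.
    """
    if comparison == "exact":
        for profile in requested:
            if profile in supported:
                return profile
        return None
    best_requested = max(rank[p] for p in requested)
    if comparison == "minimum":
        ok = [p for p in supported if rank.get(p, -1) >= best_requested]
    else:  # "better"
        ok = [p for p in supported if rank.get(p, -1) > best_requested]
    return max(ok, key=lambda p: rank[p]) if ok else None
```

In “minimum” and “better” modes the sketch simply returns the highest-ranked acceptable profile; a real MSSP could apply its own tie-breaking policy.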
Additionally, the Signature Profile is linked with two other profiles: the Registration Profile and the Certificate Profile. The Registration Profile defines requirements for the registration process, and the Certificate Profile defines what kind of certificate is created.
ETSI TR 102 206 provides an analysis of the meaning of the Signature Profile. Basically, a Signature Profile can be described by any means found appropriate by both parties (the MSSP and the AP), and it can be defined using a URI representation. General requirements for the content of Signature Profiles can be defined as follows:
Information about the registration method and assurance such as:
Information on the Secure Signature Creation Device (SSCD) technical assurance:
The generic syntax of Uniform Resource Identifiers (URIs) is defined in IETF RFC 3986.
Conventionally, Signature Profile URIs follow the “http” URI scheme (e.g. http://www.ficom.fi/something). Alternatively, there are a great number of other schemes available which could be more descriptive than the http scheme.
Today there are no semantic rules on how to construct the Signature Profile URI. Therefore, we need to consider what aspects could define a self-descriptive Signature Profile.
We have found four major aspects, which are:
Using these four characteristics, we can define the existing Signature Profile URIs; the resulting URIs are conveniently short and easy to understand.
Therefore, we recommend using the new mss-scheme base URI, constructed as follows:
mss:<Commercial service name>:PKI operation:LoA:SSCD
Cryptographic algorithms (RSA, ECC, or key lengths) are not included in the base URI. Related details can be added as URI extensions using either query strings (for example “?ECC=true&AlgECC=NIST-256&RSA=true&RSAkeylen=2048”) or a URI fragment (#ECC etc.).
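A small helper illustrating the recommended construction — the helper itself and service names such as “BrandID” are invented for this example; the extension syntax follows the query-string option above:

```python
def build_mss_uri(service, operation, loa, sscd, **extensions):
    """Construct an mss-scheme base URI:
    mss:<Commercial service name>:<PKI operation>:<LoA>:<SSCD>
    Optional keyword arguments become query-string extensions."""
    uri = f"mss:{service}:{operation}:{loa}:{sscd}"
    if extensions:
        uri += "?" + "&".join(f"{k}={v}" for k, v in extensions.items())
    return uri

# e.g. build_mss_uri("BrandID", "sign", "LoA3", "sim")
#      build_mss_uri("BrandID", "anonymous", "LoA2", "sim", RSAkeylen=2048)
```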
It is easier for the AP if it does not need to know which MNO provides the service: the MSSP can redirect a mobile signature to the correct MNO and use the correct Signature Profile regardless of the number of available Signature Profiles. Therefore, we recommend that if the commercial name differs, the MSSP converts the commercial service name when roaming between MNOs.
Therefore, we recommend that the mobile signature functions, LoAs and SSCDs be unified, at least within the same market areas. The basis for these definitions would be:
| LoA | URI form |
| --- | --- |
| LoA2 | LoA2 [.extension] |
| LoA3 | LoA3 [.extension] |
| LoA4 | LoA4 [.extension] |

Extensions may define additional requirements, such as specific identity attributes or the registration mechanism.
| PKI operation | Meaning | Notes |
| --- | --- | --- |
| sign | Plain text signing | |
| anonymous | Authentication | Restricts identity attributes (e.g. age) |
| SSCD | Meaning | Notes |
| --- | --- | --- |
| sim | UICC and SIM cards | Including eSIM etc. |
| se | Integrated token (USB, SSP) | Removable token |
Conversion from different commercial names to a complete base URI may also be possible, even though we do not recommend it: it would easily lead to a lot of confusion, since the commercial names would need to encode all of the mobile signature capabilities.
| Service | Current URI | New URI |
| --- | --- | --- |
| Anonymous authentication | | |
| Signature of plain text content | | |
| Signature of digested content | | |
| Operator authentication service (operator's internal service; or use the MNO's name as the brand) | | |
| SIM as SSCD | | |
| GSMA Mobile Connect | | |
| Mobile Connect LoA2 | | |
| Mobile Connect LoA3 | | |
| Smallest Signature Profiles which match every previous scheme | | |
Today, a person’s ability to prove their identity is seen as an important basis for participation in society and life in general. In most countries around the world, establishing a person’s identity, whether online or offline, is mandatory for access to a wide range of services, including education, healthcare, voting, banking, mobile communications, housing, etc.
With the continuing shift from face-to-face interactions to Internet-based interactions in governance, business, and several other areas, the major challenge becomes “how do we ensure a reliable and trustworthy match between an online identity and a physical one?” In addition, as mobile devices become the primary and dominant device for communication and Internet access, another challenge is understanding “What strong electronic identity solutions could be implemented and how do such solutions support a mobile-first future?”
These constraints and questions demand solutions that are not only mobile-based but also provide the highest Levels of Assurance (LoA) to achieve a similar level of trust and acceptance as a trusted identity document used in the physical world.
In addition, as a stakeholder looking to implement a strong electronic identity solution, another decision gate is universality and interoperability of the solution. In the world today, we have seen several implementations fail due to lack of interoperability. The world is filled with identity silos which are not suitable for a truly global Internet where services and people are dispersed in several countries. With increasing service and person mobility as well as cross-border trade and collaboration, stakeholders must include the design of universality at the foundation of the solution.
Many electronic identity solutions have been implemented around the world including simple 2FA solutions (e.g. using OTP/TANs); symmetric cryptography based solutions such as GSMA’s Mobile Connect solution; PKI based solutions including those implemented on USB keys, physical e-ID cards, SIM cards (Mobile PKI), Server-side signing/Remote signing (where user PKI credentials are stored in a cloud-based HSM) and different implementations of software-based certificates (e.g. smartphone applications).
Technically, solutions that make use of PKI certificates, tamper-resistant hardware tokens and strong identity verification processes are found to be most secure and provide the highest levels of identity and authentication assurance. Therefore, tamper-resistance of the security token will continue to influence key choices in the design and implementation of strong electronic identity in the future.
On the other hand, not all PKI based implementations are particularly suited for use in mobile environments. A mobile-focused approach will take advantage of the reach, usability, built-in technologies and popularity of mobiles as primary communication devices used by citizens. Strong electronic identity solutions must, therefore, be designed to support available mobile technologies to succeed.
Mobile PKI solutions promise the most security for Mobile environments albeit with less flexibility based on today’s technologies. New developments in the mobile space such as the expected surge in the number of devices with tamper-resistant, PKI eUICC cards, promise a huge opportunity for the mass implementation and delivery of strong electronic identity to end users.
Other proposed solutions, such as TEE-based electronic identity, are another promising prospect for delivering strong electronic identity to citizens. Used with biometrics, the TEE promises more flexibility for e-ID implementations; however, technology standardization and the maturity of TEE implementations will take time.
ETSI defined the Mobile Signature Service (MSS, more commonly Mobile ID/PKI) as “a universal method using a mobile device to confirm a citizen’s intention to perform a transaction.” At the core of Mobile PKI solutions is universality, allowing stakeholders to design electronic identity solutions that resolve the currently incompatible identity silos which fill the Internet.
Given the maturity, available open standards and interoperability frameworks available, Mobile PKI solutions are well suited to provide a secure, tamper-resistant and universal strong electronic identity solution. Stakeholders will do well to adopt Mobile PKI solutions to implement mass market strong electronic identity solutions that will scale into the future.
The MSSP standards were originally defined at ETSI at the beginning of this millennium. One of the key objectives of the MSSP design was a system that is secure by design. This blog unwraps those underlying design principles.
General security principles such as confidentiality, integrity, and availability do not change from one application area to another. These principles are described in the OWASP Development Guide, and we use them in this blog.
Design decisions should be practical and easy to understand. However, we should question and strengthen the security design continuously. For example, it is good practice to implement data validation with a validation routine for all forms of input and service access. It is more advanced still to apply data validation at each tier for every input, combined with appropriate error handling and demand-based access control.
To be able to protect sensitive data, it must first be classified; the security controls are based on this classification. The key security assets in the MSSP solution have been divided into four groups:
Logs, especially, are often forgotten in the classification.
When we design controls to prevent abuse of the MSSP service, we try to identify the most likely attackers:
The architects of the original MSSP standards at ETSI constructed the MSSP design to adequately cover risks from both typical usage and extreme attack. MSSP solution vendors are responsible for implementing their solutions according to these and other relevant security principles.
The MSSP security architecture refers to the following fundamental pillars:
The MSSP must provide controls to protect the confidentiality of information, integrity of data, and provide access to the data when it is required (availability) – and only to the right users.
All provided services need to be considered from a security point of view:
The MSSP system architecture design shall document the security considerations of each and every new feature: how the risks are going to be mitigated, and what was actually done during implementation or deployment.
Security architecture starts on the day the business requirements are modeled, and never finishes until the last copy of your application is decommissioned. Security is a continuous integration process, not a one-shot task.
The MSSP security relies on the following general information security core principles:
The following MSSP security principles are derived from these three core principles.
The aim of MSSP security is to reduce the overall risk by reducing the attack surface area. Therefore,
The MSSP solution itself contains extensive security mechanisms for the initialization and administration of user security. There are no defaults for a user. Signature-based authentication, certificate-based identity management, revocation procedures, and validation interfaces provide the basis for the security.
All default credentials can be easily detected and changed in the MSSP solution at installation time. Additionally, the MSSP solution provides good tools to protect all application credentials against accidental disclosure or hostile extraction.
The principle of least privilege recommends that accounts have the least amount of privilege required to perform their business processes. This encompasses user rights, resource permissions such as CPU limits, memory, network, and file system permissions.
For example, if a middleware server only requires access to the network, read access to a database table, and the ability to write to a log, this describes all the permissions that should be granted. Under no circumstances should the middleware be granted administrative privileges.
The overall MSSP solution architecture follows the principle of least privilege which requires that every module (such as a process, a user, or a program) must be able to access only the information and resources that are necessary for its legitimate purpose.
Defense in depth is an information security principle in which multiple layers of security controls are placed throughout the MSSP system. The principle of defense in depth suggests that where one control would be reasonable, more controls that approach risks in different fashions are better. Controls, when used in depth, can make severe vulnerabilities extraordinarily difficult to exploit and thus unlikely to occur.
In the MSSP system, this principle takes the form of system splitting between AE, ME and HMSSP. Each MSSP implements its own security controls and only controlled messages flow through the system.
The MSSP may fail to process transactions for many reasons; at the same time, the service should be user-friendly and secure. In many cases a service tries to be extensively helpful and tolerate user mistakes for as long as possible. Alternatively, the service could fail in every circumstance where some detail does not match.
The MSSP solution aims to implement a high-security service with an excellent user experience. Therefore, failure management follows these principles:
Many MSSP deployments need to utilize the processing capabilities of third-party partners, such as CAs or Registration Agents, who may have a different security policy and posture than the MSSP operator. It is unlikely that the MSSP can influence or control any of these third parties.
Therefore, the MSSP does not extend implicit trust to any external system; all external systems are treated in the same fashion.
A key fraud control is separation of duties. For example, if an AP can send signature requests to the AE MSSP, it cannot also send requests directly to the HMSSP. This prevents the AP from sending the same request to many MSSPs simultaneously and claiming that some requests never succeeded.
Certain AP roles, such as RA and CA, have different levels of trust than normal Application Providers. In particular, AP accounts used for service administration are different from normal APs; in general, normal APs do not hold any administrator roles.
Security through obscurity is a weak security control, and it nearly always fails when it is the only control. This is not to say that keeping secrets is a bad idea; it simply means that the security of key systems should not rely on keeping details hidden.
All MSSP interfaces are based on open standards and interfaces are well documented. No obscurity is used for any functionality. For example, the MSSP security does not rely upon knowledge of the source code or the specification being kept secret.
A practical example is Linux. Linux’s source code is widely available, and yet when properly secured, Linux is a hardy, secure and robust operating system.
Attack surface area and simplicity go hand in hand. MSSP developers have avoided the use of complex software architectures when a simpler approach or a design pattern is available.
For example, although it might be fashionable to use Java Enterprise Edition for implementing business logic, Java EE has been avoided in the Kiuru MSSP because it adds software complexity and opens an unknown number of new attack surfaces.
Once a security flaw has been identified, it is important to understand the root cause of the issue and to prepare a test for it. When a specific design pattern has been used, the same security issue has likely spread throughout the code base. What is needed is a proper software fix that does not later regress.
In the Kiuru MSSP development process this means three things: