Challenges and Future Directions for AI Spark Big Model

Published at 2024-07-21 11:12:40
Please reprint with source link

Introduction

The rapid evolution of big data technologies and artificial intelligence has transformed how society, businesses, and individuals manage, analyze, and gain insight from large volumes of data (Dwivedi et al., 2023). The AI Spark Big Model is one effective technology that has played a critical role in addressing large-scale data challenges and sophisticated ML operations. For example, the adoption of Apache Spark across industries has produced a range of diverse Spark applications, including machine learning, streaming-data processing, and fog computing (Ksolves Team, 2022). As Pointer (2024) notes, in addition to SQL, streaming data, machine learning, and graph processing, Spark offers native APIs for Java, Scala, Python, and R. These developments have made the platform fast, flexible, and friendly to developers. Still, the AI Spark Big Model faces real challenges: model interpretability, scalability, ethical implications, and integration problems. This paper addresses the issues linked to the implementation of these models and explores the developments that Spark is expected to undergo.

Challenges in the AI Spark Big Model

One critical problem affecting the implementation of the Apache Spark model is serialization, and specifically its cost (Simplilearn, 2024). Serialization and deserialization are necessary in Spark because they allow data to be transferred over the network to the executors for processing. However, these processes can be expensive, especially in languages such as Python, which do not serialize data as efficiently as Java or Scala, and this inefficiency can significantly affect the performance of Spark applications. In the Spark architecture, applications are partitioned into tasks that are sent to the executors (Nelamali, 2024), so the objects those tasks reference must be serialized for network transfer. If Spark cannot serialize an object, it raises the error org.apache.spark.SparkException: Task not serializable. This error arises in many situations, for example when an object used in a Spark task is not serializable or when a closure captures a non-serializable variable (Nelamali, 2024). Solving serialization problems is essential for the efficiency and stability of Spark applications and for their ability to process data and execute tasks in distributed systems.
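The failure mode above can be illustrated without a Spark cluster. PySpark serializes task closures with pickle/cloudpickle before shipping them to executors, so a toy sketch using the standard pickle module (an analogy, not the Spark API — a `threading.Lock` stands in for any OS-level resource like a database connection or file handle) shows why some captured objects cannot travel over the network:

```python
import pickle
import threading

# Spark serializes each task closure before shipping it to executors.
# Plain data serializes fine; OS-level resources (locks, sockets, handles)
# do not -- this is what surfaces in Spark as
# "org.apache.spark.SparkException: Task not serializable".

class GoodTask:
    """Captures only plain data -- safe to send across process boundaries."""
    def __init__(self, factor):
        self.factor = factor

    def __call__(self, x):
        return x * self.factor

class BadTask:
    """Captures a lock (stand-in for a connection/handle) -- not serializable."""
    def __init__(self):
        self.lock = threading.Lock()

    def __call__(self, x):
        return x

payload = pickle.dumps(GoodTask(2))      # succeeds: only plain data inside
restored = pickle.loads(payload)
print(restored(5))                        # 10

try:
    pickle.dumps(BadTask())               # fails: locks can't be pickled
except TypeError as e:
    print("serialization failed:", e)
```

The usual remedy in Spark mirrors the sketch: create non-serializable resources *inside* the task (e.g. per-partition via `mapPartitions`) instead of capturing them in the closure.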

Figure 1: The purpose of serialization and deserialization

The second challenge affecting the implementation of Spark is memory management. According to Simplilearn (2024), Spark's in-memory capabilities offer significant performance advantages because data is processed in memory, but they also have drawbacks that can hurt application performance. Spark applications usually demand a large amount of memory, and poor memory management results in frequent garbage-collection pauses or out-of-memory exceptions. Optimizing memory management for big data processing in Spark is not trivial and requires a good understanding of how Spark uses memory and of the available configuration parameters (Nelamali, 2024). Among the most frequent and frustrating problems is the OutOfMemoryError, which can affect Spark applications in a cluster environment. This error can occur in any part of Spark execution but is most common in the driver and executor nodes: the driver, which coordinates the execution of tasks, and the executors, which process the data, both require a proper allocation of memory to avoid failures (Simplilearn, 2024). Memory management is thus a critical aspect of any Spark application, since it affects stability and performance and requires a deliberate strategy for allocating and managing resources within the cluster.
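A minimal configuration sketch shows where these memory knobs live. The sizes below are purely illustrative (assumed here for a worker with roughly 32 GB of RAM); the right values depend entirely on the workload and cluster manager:

```shell
# Illustrative spark-submit memory sizing -- values are assumptions, not
# recommendations. your_app.py is a hypothetical application script.
spark-submit \
  --driver-memory 4g \
  --executor-memory 8g \
  --conf spark.executor.memoryOverhead=1g \
  --conf spark.memory.fraction=0.6 \
  --conf spark.memory.storageFraction=0.5 \
  your_app.py
```

`spark.memory.fraction` splits JVM heap between Spark and user data structures, while `spark.memory.storageFraction` reserves a share of Spark's portion for cached blocks; undersizing `memoryOverhead` is a common cause of executors being killed by the cluster manager even when the heap itself is fine.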

The use of Apache Spark is also greatly affected by the challenges of managing large clusters. As data volumes and cluster sizes increase, cluster management and maintenance become critical, and identifying and isolating job failures or performance issues in large distributed systems can be difficult (Nelamali, 2024). One common problem when working with large datasets is that actions fail if the total size of the results exceeds the limit set by spark.driver.maxResultSize. When this threshold is surpassed, Spark raises the error org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of z tasks (x MB) is bigger than spark.driver.maxResultSize (y MB) (Nelamali, 2024). Such errors highlight the challenges of managing big data processing in Spark, where sophisticated approaches to cluster management, resource allocation, and error control are needed to support large-scale computations.
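There are two standard responses to this error, sketched below as a config fragment (the 2g value and `your_app.py` name are illustrative assumptions):

```shell
# Option 1: raise the cap (default is 1g; 0 removes the limit entirely,
# at the risk of an out-of-memory error on the driver):
spark-submit --conf spark.driver.maxResultSize=2g your_app.py
```

Option 2, usually better, is to avoid pulling large results to the driver at all: replace `collect()` on a big DataFrame with a distributed write (`df.write.parquet(...)`) or with aggregations that shrink the result before it reaches the driver.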

Figure 2: The Apache Spark Architecture

Another critical issue affecting Apache Spark deployments is the small-files problem. Spark is inefficient when dealing with many small files because each file typically maps to a separate task, and per-task overhead can consume most of the job's time. This inefficiency makes Spark less suitable for use cases that involve many small log files or similar datasets. Moreover, Spark often depends on the Hadoop ecosystem for file handling (HDFS) and resource allocation (YARN), which adds further complexity and overhead. Nelamali (2024) notes that although Spark can operate in standalone mode, integrating Hadoop components usually improves Spark's performance.
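A back-of-envelope model makes the overhead argument concrete. The numbers below (0.1 s fixed cost per task, 100 MB/s read throughput) are illustrative assumptions, not measured Spark figures, but the shape of the result holds for any positive per-task cost:

```python
# Toy cost model for the small-files problem: total time is
# (per-task overhead + read time) summed over all files.
def total_time(n_files, bytes_per_file, overhead_s=0.1, throughput_bps=100e6):
    """Seconds to process n_files of bytes_per_file each (assumed constants)."""
    return n_files * (overhead_s + bytes_per_file / throughput_bps)

# Same 1 GB of data, two layouts:
many_small = total_time(10_000, 100_000)       # 10,000 x 100 KB files
few_large  = total_time(10, 100_000_000)       # 10 x 100 MB files

print(round(many_small), "s vs", round(few_large), "s")  # ~1010 s vs ~11 s
```

With these assumptions the small-file layout is roughly two orders of magnitude slower for identical data, which is why compacting inputs (or coalescing partitions) is the standard mitigation.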

The implementation of Apache Spark is also complicated by iterative algorithms and by limited support for complex analysis. In theory, Spark's in-memory architecture should make it well-suited to iterative algorithms, yet in practice it can be inefficient (Sewal & Singh, 2021). The inefficiency arises because Spark builds on resilient distributed datasets (RDDs) and requires users to explicitly cache intermediate data that later iterations will reuse; without caching, each iteration rereads and recomputes its inputs, increasing execution time and resource consumption and eroding the expected performance gains. In addition, although Spark ships with MLlib for large-scale machine learning, its libraries may not be as extensive or as deep as those of dedicated machine learning platforms (Nguyen et al., 2019). Users may find MLlib limited in its algorithms, hyper-parameter optimization, and compatibility with other major ML frameworks. These restrictions make Spark less suitable for more elaborate analytical work, and practitioners may have to combine it with other tools and systems to obtain a given result.

The Future of Spark

a. Enhanced Machine Learning (ML)

As ML assumes greater importance in big data analysis, Spark's MLlib is updated frequently to manage the increasing complexity of ML procedures (Elshawi et al., 2018). This evolution centers on expanding the offered algorithms and tools to refine performance, functionality, and flexibility. Future enhancements are likely to introduce deep learning interfaces integrated directly into the Spark platform, along with support for more neural-network architectures. Integration with TensorFlow and PyTorch, together with GPU-optimized libraries, will reduce the time and computational cost of training and inference for high-dimensional data and large-scale machine learning problems. The focus will also be on simplifying the user experience through better APIs, AutoML capabilities, and friendlier interfaces for model optimization and testing (Simplilearn, 2024). These advancements will benefit data scientists and engineers who deal with big data and help democratize ML by providing easy ways to deploy and manage ML pipelines in distributed systems. Better support for real-time analysis and online learning will also help organizations gain real-time insights, improving decision-making.

b. Improved Performance and Efficiency

Apache Spark's core engine is continuously improving to make it faster and more efficient as it remains one of the most popular technologies in the big data space. Areas of interest include memory management and higher-level optimizations that minimize computational overhead and resource utilization (Simplilearn, 2024). Memory-management optimizations will reduce time spent in garbage collection and improve in-memory data processing, which is vital for high throughput and low latency in big data workloads. Improvements to the Catalyst query optimizer and the Tungsten execution engine will also allow better execution of complicated queries and data transformations; these enhancements matter most where large amounts of data are shuffled and aggregated, a frequent source of performance problems. Future work on supporting contemporary hardware, such as faster NVMe storage devices and advances in CPUs and GPUs, will further increase Spark's capacity to process more data faster (Armbrust et al., 2015). Moreover, continued work on adaptive query execution (AQE) will let Spark adapt execution plans at runtime using observed statistics, further improving data processing performance. Altogether, these improvements will ensure that Spark remains a high-performance, scalable tool for analyzing large datasets.
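It is worth noting that AQE already ships in Spark 3.x and is switched on through configuration. A minimal sketch (the application name is a placeholder):

```shell
# Enabling adaptive query execution and its two main sub-features:
spark-submit \
  --conf spark.sql.adaptive.enabled=true \
  --conf spark.sql.adaptive.coalescePartitions.enabled=true \
  --conf spark.sql.adaptive.skewJoin.enabled=true \
  your_app.py
```

With these flags, Spark can coalesce small shuffle partitions and split skewed join partitions at runtime based on the statistics of completed stages, rather than relying solely on the plan chosen before execution began.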

c. Integration with the Emerging Data Sources

As the number and variety of data sources grow, Apache Spark will evolve to process many new data types. This evolution will enhance support for streaming data originating from IoT devices, which produce real-time data requiring real-time analysis. Improved connectors and APIs will streamline data ingestion and processing in real time, improving how quickly Spark handles high-velocity data (Dwivedi et al., 2023). Spark's cloud integration will also deepen, with cloud platforms taking charge of large-scale data storage and processing; this involves more robust integration with cloud-native storage, data warehousing, and analytics services from AWS, Azure, and Google Cloud. Spark will also connect to other kinds of databases, such as NoSQL, graph, and blockchain databases, enabling users to run analytics over data of different types and structures. In this way Spark will allow organizations to extract maximum value from the information they hold, regardless of its source and form, yielding more comprehensive and timely insights.

d. Cloud-Native Features

As cloud computing becomes ubiquitous, Apache Spark is building native compatibility with cloud environments that makes it easier to run there. Cloud-focused updates include auto-scaling services and provisioning and configuration tools that simplify deploying Spark clusters on cloud platforms (Simplilearn, 2024). These tools will integrate with cloud-native storage and compute resources and let users grow their workloads in the cloud. New resource-management capabilities will allow users to allocate cloud resources more effectively according to load, releasing resources during periods of low utilization and balancing cost against performance. Spark will also continue to add support for serverless computing frameworks, enabling users to execute Spark applications without managing the underlying infrastructure. This serverless approach allows automatic scaling, high availability, and cost optimization, since users pay only for the compute time they use. Improved support for Kubernetes, the most popular container orchestration system, will strengthen Spark's cloud-native features and improve container management, orchestration, and integration with other cloud-native services (Dwivedi et al., 2023). These enhancements will make Spark more usable and cost-effective for organizations running big data analytics on cloud infrastructure while reducing the operational overhead of doing so.
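Spark's Kubernetes support is likewise already usable today via spark-submit. A hedged sketch of a cluster-mode submission (the API-server address, image name, namespace, and script path are all placeholders to fill in for a real cluster):

```shell
# Submitting a PySpark application to Kubernetes in cluster mode.
# Angle-bracketed values are placeholders, not working defaults.
spark-submit \
  --master k8s://https://<k8s-api-server>:6443 \
  --deploy-mode cluster \
  --name demo-app \
  --conf spark.kubernetes.namespace=spark \
  --conf spark.kubernetes.container.image=<your-spark-image> \
  --conf spark.executor.instances=4 \
  local:///opt/spark/work-dir/app.py
```

Here Kubernetes itself plays the role of the cluster manager: the driver runs as a pod, requests executor pods, and releases them when the application finishes, which is what gives Spark its elastic, cloud-native resource footprint.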

e. Broader Language Support

Apache Spark is expected to become even more flexible as support for additional programming languages is added to the current list of Scala, Java, Python, and R. By including languages like Julia, known for its numerical and scientific computing performance, Spark could attract developers working in niches that demand high-performance data processing (Simplilearn, 2024). Supporting languages like JavaScript could likewise bring Spark to the large community of web developers, allowing them to perform big data analytics in a familiar environment. Broader language support would let Spark integrate with the software environments and workflows that developers already rely on. This inclusiveness also widens Spark's reach, making large-scale data analysis more accessible, while a larger pool of contributors fosters creativity as more people participate in, and benefit from, the platform (Dwivedi et al., 2023). By becoming more accessible and supporting more programming languages, Spark would embed itself even more deeply in the big data landscape and attract more people to develop the technology.

f. Cross-Platform and Multi-Cluster Operations

In the future, Apache Spark will see significant development aimed at cross-platform interoperability and at operating multiple clusters across hybrid and multi-cloud environments (Dwivedi et al., 2023). Such improvements will free organizations from running Spark workloads on a single platform or with a single cloud vendor, making more complex and decentralized data processing tasks possible. Interoperability will improve so that data can be integrated and shared between on-premise systems, private clouds, and public clouds while remaining consistent (Simplilearn, 2024). These developments will offer a real-time view of cluster state and resource consumption, helping to mitigate the operational overhead of managing distributed systems. Strong security measures and compliance tools will also guarantee sound data management and security across regions and environments (Dwivedi et al., 2023). With cross-platform and multi-cluster capabilities, Spark will help organizations fully leverage their data architecture, allowing for more flexible, scalable, and fault-tolerant big data solutions that match each organization's requirements and deployment topology.

g. Stronger Growth of the Community and Ecosystem

Apache Spark's future is closely linked to the health of its open-source ecosystem, which drives development through contributions and innovations. As more developers, researchers, and organizations adopt Spark, we can expect new libraries and tools that expand its application in different fields (Simplilearn, 2024). Community-driven projects may produce specialized libraries for data analysis, machine learning, and other advanced functions, making Spark even more versatile and efficient. These efforts should deliver new features and better performance, encourage best practices and comprehensive documentation, and make the project approachable for new contributors. Collaboration will also support new real-time processing features, better resource utilization, and compatibility with other technologies, as noted by Armbrust et al. (2015). Further growth of the ecosystem will bring more active and creative users who can test and improve solutions quickly. This culture of continual improvement will ensure that Spark keeps evolving, remains relevant for big data analytics now and in the future, and stays attractive in the market despite the shifting technological landscape.

Conclusion

Despite significant progress, Apache Spark still faces difficulties in big data and machine learning workloads even with its flexible, fault-tolerant architecture: serialization costs, memory management, and the administration of very large clusters. Nevertheless, the future of Spark is bright, with better machine learning features, improved performance, integration with emerging data sources, and new cloud-native capabilities all on the horizon. Broader language support, cross-platform and multi-cluster operations, and the continued growth of the Spark community and ecosystem will further enhance its importance among big data and AI platforms. By overcoming these challenges and building on this progress, Spark will continue to improve and to offer more efficient solutions across data processing and analysis activities.

References

  1. Armbrust, M., Xin, R. S., Lian, C., Huai, Y., Liu, D., Bradley, J. K., ... & Zaharia, M. (2015, May). Spark SQL: Relational data processing in Spark. In Proceedings of the 2015 ACM SIGMOD international conference on management of data (pp. 1383-1394).
  2. Dwivedi, Y. K., Sharma, A., Rana, N. P., Giannakis, M., Goel, P., & Dutot, V. (2023). Evolution of artificial intelligence research in Technological Forecasting and Social Change: Research topics, trends, and future directions. Technological Forecasting and Social Change, p. 192, 122579.
  3. Elshawi, R., Sakr, S., Talia, D., & Trunfio, P. (2018). Big data systems meet machine learning challenges: Towards big data science as a service. Big Data Research, 14, 1-11.
  4. Ksolves Team (2022). Apache Spark Benefits: Why Enterprises are Moving To this Data Engineering Tool. Available at: https://www.ksolves.com/blog/big-data/spark/apache-spark-benefits-reasons-why-enterprises-are-moving-to-this-data-engineering-tool#:~:text=Apache%20Spark%20is%20rapidly%20adopted,machine%20learning%2C%20and%20fog%20computing.
  5. Nelamali, M. (2024). Different types of issues while running in the cluster. https://sparkbyexamples.com/spark/different-types-of-issues-while-running-spark-projects/
  6. Nguyen, G., Dlugolinsky, S., Bobák, M., Tran, V., López García, Á., Heredia, I., ... & Hluchý, L. (2019). Machine learning and deep learning frameworks and libraries for large-scale data mining: a survey. Artificial Intelligence Review, 52, 77-124.
  7. Pointer, K. (2024). What is Apache Spark? The big data platform that crushed Hadoop. Available at: https://www.infoworld.com/article/2259224/what-is-apache-spark-the-big-data-platform-that-crushed-hadoop.html#:~:text=Berkeley%20in%202009%2C%20Apache%20Spark,machine%20learning%2C%20and%20graph%20processing.
  8. Sewal, P., & Singh, H. (2021, October). A critical analysis of Apache Hadoop and Spark for big data processing. In 2021 6th International Conference on Signal Processing, Computing and Control (ISPCC) (pp. 308–313). IEEE.
  9. Simplilearn (2024). The Evolutionary Path of Spark Technology: Lets Look Ahead! Available at: https://www.simplilearn.com/future-of-spark-article#:~:text=Here%20are%20some%20of%20the,out%2Dof%2Dmemory%20errors.
  10. Tang, S., He, B., Yu, C., Li, Y., & Li, K. (2020). A survey on spark ecosystem: Big data processing infrastructure, machine learning, and applications. IEEE Transactions on Knowledge and Data Engineering, 34(1), 71-91.
